


THERMAL 
PHYSICS 

A Preliminary Edition 



Philip M. Morse 

Professor of Physics 

Massachusetts Institute of Technology 



W. A. Benjamin, Inc. 
New York 1962 



Copyright © 1961 by W. A. Benjamin, Inc. 
All rights reserved. 

Library of Congress Catalog Card Number: 61-18590 
Manufactured in the United States of America 



The manuscript was received on August 25, 1961, and published on January 25, 1962



W. A. BENJAMIN, INC. 

2465 Broadway, New York 25, New York 



Preface 



This book represents a stage in the process of revising the course 
in heat and thermodynamics presented to physics undergraduates at 
the Massachusetts Institute of Technology. It is, in its small way, a 
part of the revision of undergraduate curriculum in physics which has 
recently been going on at the Institute and at other centers of physics 
in this country. Such major revisions must be made at least once a 
generation if we are to compress the expanding field of physics into 
a form that can be assimilated by the physics student by the time he 
completes his graduate education. 

Such a downward compaction of new physics, introducing it earlier 
in the education of a physicist, is not a task that can be undertaken 
one subject at a time. For example, the basic techniques of classical 
dynamics and electromagnetic theory could not now be taught effec- 
tively to juniors or seniors unless they were already fluently conver- 
sant with the differential calculus. This particular revision was ac- 
complished, in this country, about a generation ago; prior to that time 
graduate students had to unlearn the geometric tricks they had learned 
as undergraduates, before they could assimilate the methods of La- 
grange and Hamilton. 

Now it is necessary to bring the concepts of the quantum theory 
into the undergraduate curriculum, so the graduate student does not 
have to start over again when he takes his first graduate course in 
atomic or nuclear physics. Again the revision must be thorough, from 
the content of the freshman courses in physics and chemistry to the 
choice of topics in electromagnetic theory and dynamics. Unless the 
student becomes familiar with quantum theory and with the parts of 
classical theory relevant to quantum theory while an undergraduate, 
he has not the time to achieve a general understanding of modern 
physics as a graduate student. 

Perhaps this task of compression will eventually become impossi- 
ble; perhaps we shall have to give up educating physicists and resign 
ourselves to educating nuclear physicists or solid-state physicists or 

the like. This has happened in other branches of science. It would 
inevitably happen here if the task were simply that of compression of 
a growing mass of disparate observed facts. But physics has so far 
been fortunate enough to have the scope of its theoretical constructs 
expand nearly as rapidly as the volume of its experimental data. Thus 
it has so far not been necessary, for example, to have to teach two 
subjects in the time previously devoted to one; it has instead been 
possible to teach the concepts of a new theory which encompasses 
both of the earlier separate subjects, concentrating on conceptual un- 
derstanding and relegating details to later specialist subjects. The 
range of coverage of each course has increased; by and large the 
number of concepts has not. 

Of course the newer, more-inclusive theories embody more- 
sophisticated concepts and a wider range of mathematical techniques. 
So it is not an easy job to make them understandable to the undergrad- 
uate. However, the task is not yet impossible, in this writer's opinion. 

Statistical physics is a case in point. It impinges on nearly all of 
modern physics and is basic for the understanding of many aspects, 
both experimental and theoretical. Classical thermodynamics is in- 
adequate for many of the applications; to be useful in the areas of 
present active research it must be combined with quantum theory via 
the concepts of statistical mechanics. And this enrichment of thermo- 
dynamics should be included in the undergraduate course, so that the 
student can apply it in his graduate courses from the first. 

Such a course needs thorough replanning, both as to choice of ma- 
terial and order. Topics must be omitted to make room for new items 
to be added. The problem is not so much the number of different con- 
cepts to be taught as the abstractness and sophistication of these con- 
cepts. The thermodynamic part has to be compressed, of course, but 
not at the price of excluding all variables except P, V, and T. Engi- 
neering applications have to be omitted and special cases can be rel- 
egated to problems. Because of its importance in statistical mechan- 
ics, entropy should be stressed more than might be necessary in a 
course of thermodynamics alone. The kinetic theory part can be used 
as an introduction to the concepts of statistical mechanics, tying the 
material together with the Boltzmann equation, which recently became 
important in plasma physics. In statistical mechanics the effort must 
be to unify the point of view, so that each new aspect does not seem 
like a totally new subject. More needs to be done than is embodied in 
this text, but the class response so far has indicated that ensembles 
and partition functions are not necessarily beyond the undergraduate. 

As usual, the problem is best solved experimentally, by trying out 
various ways of presentation to see which ones lead the student most 
easily to the correct point of view— which exposition brings the sparks 
of interest and understanding from the class. The problem would not 
be soluble, of course, if the students did not already possess some 



knowledge of atomic structure and the basic concepts of quantum me- 
chanics provided by a prior course in atomic structure that did not 
stop at Bohr orbits. 

The author has been working at this pedagogical problem, off and 
on, for about five years, first taking recitation sections and, for the 
past two years, giving the lectures in the spring-semester senior 
course in thermodynamics and statistical mechanics for physics ma- 
jors at the Institute. The present text is based on the lecture notes of 
the past term. Obviously the presentation has not yet reached its final 
form, ready for embalming in hard covers. But the results so far are 
encouraging, both in regard to interest aroused in the students and to 
concepts assimilated (at least long enough to use them in the final ex- 
amination!). It is rated as a stiff course, but in the writer's experi- 
ence this has never hurt a course's popularity; boredom is shunned, 
not work. 

The writer is indebted to the subjects of his experimentation, the 
roughly 300 students who have attended his lectures during the past 
two years. Their interest, their questions, and their answers on ex- 
amination papers have materially influenced the choice of subject 
matter and its manner of presentation. He is also particularly in- 
debted to Professors L. Tisza and L. C. Bradley, who have read and 
commented on various parts of the material, and to Mr. Larry Zamick, 
who has painstakingly checked over the present text. They are to be 
thanked for numerous improvements and corrections; they should not 
be blamed for the shortcomings that remain. 

PHILIP M. MORSE 

Cambridge, Massachusetts 
July 1961 



Contents 

Preface 

PART I. THERMODYNAMICS 

1. Introduction 
Historical. Thermodynamics and statistical mechanics. Equilibrium states 

2. Heat, Temperature, and Pressure 
Temperature. Heat and energy. Pressure and atomic motion. Pressure in a perfect gas 

3. State Variables and Equations of State 
Extensive and intensive variables. Pairs of mechanical variables. The equation of state of a gas. Other equations of state. Partial derivatives 

4. The First Law of Thermodynamics 
Work and internal energy. Heat and internal energy. Quasistatic processes. Heat capacities. Isothermal and adiabatic processes 

5. The Second Law of Thermodynamics 
Heat engines. Carnot cycles. Statements of the second law. The thermodynamic temperature scale 

6. Entropy 
A thermal state variable. Reversible processes. Irreversible processes. Entropy of a perfect gas. The Joule experiment. Entropy of a gas. Entropy of mixing 

7. Simple Thermodynamic Systems 
The Joule-Thomson experiment. Black-body radiation. Paramagnetic gas 

8. The Thermodynamic Potentials 
The internal energy. Enthalpy. The Helmholtz and Gibbs functions. Procedures for calculation. Examples and useful formulas 

9. Changes of Phase 
The solid state. Melting. Clausius-Clapeyron equation. Evaporation. Triple point and critical point 

10. Chemical Reactions 
Chemical equations. Heat evolved by the reaction. Reactions in gases. Electrochemical processes 

PART II. KINETIC THEORY 

11. Probability and Distribution Functions 
Probability. Binomial distribution. Random walk. The Poisson distribution. The normal distribution 

12. Velocity Distributions 
Momentum distribution for a gas. The Maxwell distribution. Mean values. Collisions between gas molecules 

13. The Maxwell-Boltzmann Distribution 
Phase space. The Boltzmann equation. A simple example. A more general distribution function. Mean energy per degree of freedom. A simple crystal model. Magnetization and Curie's law 

14. Transport Phenomena 
An appropriate collision function. Electric conductivity in a gas. Drift velocity. Diffusion 

15. Fluctuations 
Equipartition of energy. Mean-square velocity. Fluctuations of simple systems. Density fluctuations in a gas. Brownian motion. Random walk. The Langevin equation. The Fokker-Planck equations 

PART III. STATISTICAL MECHANICS 

16. Ensembles and Distribution Functions 
Distribution functions in phase space. Liouville's theorem. Quantum states and phase space 

17. Entropy and Ensembles 
Entropy and information. Information theory. Entropy for equilibrium states. Application to a perfect gas 

18. The Microcanonical Ensemble 
Example of a simple crystal. Microcanonical ensemble for a perfect gas. The Maxwell distribution 

19. The Canonical Ensemble 
Solving for the distribution function. General properties of the canonical ensemble. The effects of quantization. The high-temperature limit 

20. Statistical Mechanics of a Crystal 
Normal modes of crystal vibration. Quantum states for the normal modes. Summing over the normal modes. The Debye formulas. Comparison with experiment 

21. Statistical Mechanics of a Gas 
Factoring the partition function. The translational factor. The indistinguishability of molecules. Counting the system states. The classical correction factor. The effects of molecular interaction. The Van der Waals equation of state 

22. A Gas of Diatomic Molecules 
The rotational factor. The gas at moderate temperatures. The vibrational factor 

23. The Grand Canonical Ensemble 
An ensemble with variable N. The grand partition function. The perfect gas once more. Density fluctuations in a gas 

24. Quantum Statistics 
Occupation numbers. Maxwell-Boltzmann particles. Bosons and fermions. Comparison among the three statistics. Distribution functions and fluctuations 

25. Bose-Einstein Statistics 
General properties of a boson gas. Classical statistics of black-body radiation. Statistical mechanics of a photon gas. Statistical properties of a photon gas. Statistical mechanics of a boson gas. Thermal properties of a boson gas. The degenerate boson gas 

26. Fermi-Dirac Statistics 
General properties of a fermion gas. The degenerate fermion gas. Behavior at intermediate temperatures 

27. Quantum Statistics for Complex Systems 
Wave functions and statistics. Symmetric wave functions. Antisymmetric wave functions. Wave functions and equilibrium states. Electrons in a metal. Ortho- and parahydrogen 

References 

Problems 

Constants 

Glossary 

Index 




THERMODYNAMICS 




Introduction 



The subject matter of this book, thermodynamics, kinetic theory, 
and statistical mechanics, constitutes the portion of physics having to 
do with heat. Thus it may be called thermal physics. Since heat and 
other related properties, such as pressure and temperature, are char- 
acteristics of aggregates of matter, the subject constitutes a part of 
the physics of matter in bulk. Its theoretical models, devised to cor- 
respond with the observed behavior of matter in bulk (and to predict 
other unobserved behavior), rely on the mathematics of probability 
and statistics; thus the subject may alternately be called statistical 
physics. 

Historical 

The part of the subject called thermodynamics has had a long and 
controversial history. It began at the start of the industrial revolution, 
when it became important to understand the relation between heat and 
chemical transformations and the conversion of heat into mechanical 
energy. At that time the atomic nature of matter was not yet under- 
stood and the mathematical model, developed to represent the relation 
between thermal and mechanical behavior, had to be put together with 
the guidance of the crude experiments that could then be made. Many 
initial mistakes were made and the theory had to be drastically revised 
several times. This theory, now called thermodynamics, concerns it- 
self solely with the macroscopic properties of aggregated matter, such 
as temperature and pressure, and their interrelations, without refer- 
ence to the underlying atomic structure. The general pattern of these 
interrelations is summarized in the laws of thermodynamics, from 
which one can predict the complete thermal behavior of any substance, 
given a relatively few empirical relationships, obtained by macro- 
scopic measurements made on the substance in question. 

In the latter half of the nineteenth century, when the atomic nature 
of matter began to be understood, efforts were made to learn how the 






macroscopic properties of matter, dealt with by thermodynamics, 
could depend on the assumed behavior of constituent atoms. The first 
successes of this work were concerned with gases, where the interac- 
tions between the atomic components were minimal. The results pro- 
vide a means of expressing the pressure, temperature, and other mac- 
roscopic properties of the gas in terms of average values of proper- 
ties of the molecules, such as their kinetic energy. This part of the 
subject came to be called kinetic theory. 

In the meantime a much more ambitious effort was begun by Gibbs 
in this country, and by Boltzmann and others in Europe, to provide a 
statistical correspondence between the atomic substructure of any 
piece of matter and its macroscopic behavior. Gibbs called this the- 
ory statistical mechanics. Despite the fragmentary knowledge of 
atomic physics at the time, statistical mechanics was surprisingly 
successful from the first. Since then, of course, increased atomic 
knowledge has enabled us to clarify its basic principles and extend its 
techniques. It now provides us with a means of understanding the laws 
of thermodynamics and of predicting the various relations between 
thermodynamic variables, hitherto obtained empirically. 

Thermodynamics and Statistical Mechanics 

Thus thermodynamics and statistical mechanics are mutually com- 
plementary. For example, if the functional relationship between the 
pressure of a gas, its temperature, and the volume it occupies is 
known, and if the dependence of the heat capacity of the gas on its 
temperature and pressure has been determined, then thermodynamics 
can predict how the temperature and pressure are related when the 
gas is isolated thermally, or how much heat it will liberate when com- 
pressed at constant temperature. Statistical mechanics, on the other 
hand, seeks to derive the functional relation between pressure, vol- 
ume, and temperature, and also the behavior of the heat capacity, in 
terms of the properties of the molecules that make up the gas. 

In this volume we shall first take up thermodynamics, because it is 
more obviously related to the gross physical properties we wish to 
study. But we shall continue to refer back to the underlying micro- 
structure, by now well understood, to remind ourselves that the ther- 
modynamic variables are just another manifestation of atomic behav- 
ior. In fact, because it does not make use of atomic concepts, thermo- 
dynamics is a rather abstract subject, employing sophisticated con- 
cepts, which have many logical interconnections; it is not easy to un- 
derstand one part until one understands the whole. In such a case it is 
better pedagogy to depart from strict logical presentation. Hence 
several derivations and definitions will be given in steps, first pre- 
sented in simple form and, only after other concepts have been intro- 
duced, later re-enunciated in final, accurate form. 




Part of the difficulty comes from the fact, more apparent now than 
earlier, that the thermodynamic quantities such as temperature and 
pressure are aggregate effects of related atomic properties. In ther- 
modynamics we assume, with considerable empirical justification, 
that whenever a given amount of gas, in a container of given volume, 
is brought to a given temperature, its pressure and other thermody- 
namic properties will take on specific values, no matter what has been 
done to the gas previously. By this we do not mean that when the gas 
is brought back to the same temperature each molecule of the gas re- 
turns to the same position and velocity it had previously. All we mean 
is that the average effects of all the atoms return to their original val- 
ues, that even if a particular molecule does not return to its previous 
position or velocity, its place will have been taken by another, so that 
the aggregate effect is the same. 

To thus assume that the state of a given collection of atoms can be 
at all adequately determined by specifying the values of a small num- 
ber of macroscopic variables, such as temperature and pressure, 
would at first seem to be an unworkable oversimplification. Even if 
there were a large number of different configurations of atomic posi- 
tions and motions which resulted in the same measurement of tempera- 
ture, for example, there is no a priori reason that all, or even most, 
of these same configurations would produce the same pressure. What 
must happen (and what innumerable experiments show does happen) is 
that a large number of these configurations do produce the same pres- 
sure and that thermodynamics has a method of distinguishing this sub- 
set of configurations from others, which do not produce the same 
pressure. The distinguishing feature of the favored subset is embodied 
in the concept of the equilibrium state. 

Equilibrium States 

A detailed specification of the position, velocity, and quantum state 
of each atom in a given system is called a microstate of the system. 
The definition is useful conceptually, not experimentally, for we can- 
not determine by observation just what microstate a system is in at 
some instant, and we would not need to do so even if we could. As we 
said before, many different microstates will produce the same macro- 
scopic effects; all we need to do is to find a method of confining our 
attention to that set of microstates which exhibits the simple relations 
between macroscopic variables, with which thermodynamics concerns 
itself. 

Consider for a moment all those microstates of a particular gas 
for which the total kinetic energy of all the molecules is equal to some 
value U. Some of the microstates will correspond to the gas being in 
a state of turbulence, some parts of the gas having net momentum in 
one direction, some in another. But a very large number of micro- 




states will correspond to a fairly uniform distribution of molecular 
kinetic energies and directions of motion, over all regions occupied 
by the gas. In these states, which we shall call the equilibrium micro- 
states, we shall find that the temperature and pressure are fairly uni- 
form throughout the gas. It is a fact, verified by many experiments, 
that if a gas is started in a microstate corresponding to turbulence, it 
will sooner or later reach one of the equilibrium microstates, in which 
temperature and pressure are uniform. From then on, although the 
system will change from microstate to microstate as the molecules 
move about and collide, it will confine itself to equilibrium micro- 
states. To put it in other language, although the gas may start in a 
state of turbulence, if it is left alone long enough internal friction will 
bring it to that state of thermodynamic quiescence we call equilibrium, 
where it will remain. 

Classical thermodynamics only deals with equilibrium states of a 
system, each of which corresponds to a set of indistinguishable micro- 
states, indistinguishable because the temperature, the pressure, and 
all the other applicable thermodynamic variables have the same val- 
ues for each microstate of the set. These equilibrium states are 
reached by letting the system settle down long enough so that quanti- 
ties such as temperature and pressure become uniform throughout, so 
that the system has a chance to forget its past history, so to speak. 

Quantities, such as pressure and temperature, which return to the 
same values whenever the system returns to the same equilibrium 
state, are called state variables. A thermodynamic state of a given 
system is thus completely defined by specifying the values of a rela- 
tively few state variables (which then become the independent varia- 
bles), whereupon the values of all the other applicable state variables 
(the dependent variables) are uniquely determined. Dependent state 
variable U is thus specified as a function U(x,y, ..., z) of the inde- 
pendent variables x, ..., z, where it is tacitly understood that the 
functional relationship only holds for equilibrium states. 

It should be emphasized that the fact that there are equilibrium 
states, to which matter in bulk tends to approach spontaneously if left 
to itself, and the fact that there are thermodynamic variables which 
are uniquely specified by the equilibrium state (independent of the past 
history of the system) are not conclusions deduced logically from some 
philosophical first principles. They are conclusions ineluctably drawn 
from more than two centuries of experiments. 




Heat, Temperature, 
and Pressure 



In our introductory remarks we have used the words temperature 
and pressure without definition, because we could be sure the reader 
had encountered them earlier. Before long, however, these quantities 
must be defined, by describing how they are measured and also by in- 
dicating how they are related to each other and to all the other quanti- 
ties that enter into the theoretical construct we call thermodynamics. 
As mentioned earlier, this construct is so tightly knit that an adequate 
definition of temperature involves other concepts and quantities, them- 
selves defined in terms of temperature. Thus a stepwise procedure is 
required. 

Temperature 

The first step in a definition of temperature, for example, is to re- 
fer to our natural perception of heat and cold, noting that the property 
seems to be one-dimensional, in that we can arrange objects in a 
one-dimensional sequence, from coldest to hottest. Next we note that many 
materials, notably gases and some liquids, perceptibly expand when 
heated; so we can devise and arbitrarily calibrate a thermometer. The 
usual calibration, corresponding to the centigrade scale, sets 0° at 
the temperature of melting ice and 100° at the temperature of boiling 
water, and makes the intermediate scale proportional to the expansion 
of mercury within this range. We use such a thermometer to provide 
a preliminary definition of temperature, T, until we have learned 
enough thermodynamics to understand a better one. 

Next we note, by experimenting on our own or by taking the word of 
other experimenters, that when a body comes to equilibrium its tem- 
perature is uniform throughout. In fact we soon come to use uniform- 
ity of temperature as an experimental criterion of equilibrium. 

When we turn to the question of the cause of change of temperature 
the usual answer, that a rise in temperature is caused by the addition 
of heat, has no scientific meaning until we define heat. And thus we 






arrive at the source of many of the mistakes made in the early devel- 
opment of thermodynamics. Heat is usually measured by measuring 
the rise in temperature of the body to which the heat is added. This 
sounds like a circular definition: The cause of temperature rise is 
heat, which is measured by the temperature rise it causes. Actually 
it is something more, for it assumes that there is something unique 
called heat, which produces the same temperature change no matter 
how the heat is produced. Heat can be generated by combustion, by do- 
ing work against friction, by sending electric current through a resis- 
tor, or by rapidly compressing a gas. The amount of heat produced in 
each of these ways can be measured in terms of the temperature rise 
of a standard body and the effects are proportional; twice the combus- 
tion producing twice the temperature rise, for example. 

The early measurements of heat were all consistent with the theory 
that heat is a "fluid" similar to the "electric fluid" which was also 
being investigated at that time. The heat fluid was supposed to be 
bound to some atoms in the body; it could be detached by pressure, 
friction, or combustion; in its free state it would affect thermometers. 
It seemed at first that the amount Q of "free heat fluid" present in a 
body should be a thermodynamic state variable such as pressure or 
temperature, a definite function of the independent variables that define 
the equilibrium state. The Q for a particular amount of gas was sup- 
posed to be a specific function of the temperature and pressure of the 
gas, for instance. 

Later, however, it was demonstrated that heat is just one manifes- 
tation of the energy possessed by a body, that heat could be trans- 
formed into mechanical energy and vice versa. Historically, this 
change in theory is reflected by the change in the units used to meas- 
ure heat. At first the unit was the kilogram-calorie, the amount of 
heat required to raise a kilogram of water from 4 to 5°C. More re- 
cently heat has been measured in terms of the usual units of energy, 
the joule in the mks system of units. Careful measurement of the en- 
ergy lost to friction, or that lost in passing current through a resistor, 
together with the resulting temperature rise in water placed in ther- 
mal contact, shows that a kilogram-calorie of heat is equal to 4182 
joules. 
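As a rough numerical sketch of such a measurement (the current, resistance, running time, and amount of water below are assumed for illustration only):

```python
# Illustrative numbers only: electrical energy dissipated in a resistor
# immersed in water, converted at 4182 joules per kilogram-calorie.
current = 2.0          # amperes (assumed)
resistance = 10.0      # ohms (assumed)
seconds = 2091.0       # duration of the run (assumed)

energy = current**2 * resistance * seconds    # Joule heating, I^2 R t
heat_kcal = energy / 4182.0                   # kilogram-calories produced

mass_water = 10.0      # kilograms of water in thermal contact (assumed)
delta_T = heat_kcal / mass_water              # 1 kcal raises 1 kg by 1 deg
print(f"{energy:.0f} J = {heat_kcal:.1f} kcal, rise = {delta_T:.1f} deg C")
```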

Heat and Energy 

As soon as we realize that heat is just a particular manifestation 
of the energy content of a body, we see that Q cannot be a state varia- 
ble. For we can add energy to a body in the form of heat and then take 
it away in the form of mechanical energy, bringing the body back to its 
initial equilibrium state at the end of such a cycle of operation. If heat 
were a state variable, as much would have to be given off during the 
cycle as was absorbed for Q to come back to its original value at the 




end of the cycle. But if heat can be changed to work, the net amount of 
heat added during the cycle may be positive, zero, or negative, depend- 
ing on the net amount of work done by or on the body during the cycle. 
The quantity which is conserved, and which thus is the state variable, 
is U, the energy possessed by the body, which can be drawn off either 
as heat or as mechanical work, depending on the circumstances. As 
we shall see more clearly later, heat represents that energy content 
of the body which is added or removed in disorganized form; work is 
the energy added or removed in organized form; within certain limits 
disorganization can be changed to organization and vice versa. 

We can increase the energy of a body by elevating it, doing work 
against the force of gravity. This increase is in potential energy, 
which is immediately available again as work. The temperature of the 
body is not changed by the elevation, so its heat content is not changed. 
We can translate the potential energy into organized kinetic energy by 
dropping the body; this also makes no change in its heat content. But 
if the body, in its fall, hits the ground, the organized motion of the 
body is changed to disorganized internal vibrations; its temperature 
rises; heat has been produced. In a sense, the reason that classical 
thermodynamics is usually limited to a study of equilibrium states is 
because an equilibrium state is the state in which heat energy can 
easily and unmistakably be distinguished from mechanical energy. 
Before equilibrium is reached, sound waves or turbulence may be 
present, and it is difficult to decide when such motion ceases to be 
"mechanical" and becomes "thermal." 

A preliminary definition of pressure can be more quickly achieved, 
for pressure is a mechanical quantity; we have encountered it in hy- 
dromechanics and acoustics. Pressure is a simple representative of 
a great number of internal stresses which can be imposed on a body, 
such as tensions or torques or shears, changes in any of which repre- 
sent an addition or subtraction of mechanical energy to the body. 
Pressure is more usually encountered in thermodynamic problems; it 
is the only stress that a gas can sustain in equilibrium. The usual 
units of pressure are newtons per square meter, although the atmos- 
phere (about 10^5 newtons per square meter) is also used. 

Pressure and Atomic Motion 

The pressure exerted by a gas on its container walls is a very good 
example of a mechanical quantity which is the resultant of the random 
motions of the gas molecules and which nonetheless is a remarkably 
stable function of a relatively small number of state variables. To il- 
lustrate this point we shall digress for a few pages into a discussion of 
kinetic theory. The pressure P on a container wall is the force ex- 
erted by the gas, normal to an area dA of the wall, divided by dA. 
This force is caused by the collisions of the gas molecules against the 




area dA. Each collision delivers a small amount of momentum to dA; 
the amount of momentum delivered per second is the force P dA. 

Let us assume a very simplified model of a gas, one consisting of 
N similar atoms, each of mass m and of "negligible" dimensions, 
with negligible interactions between them so that the sole energy of 
the ith atom is its kinetic energy of translation, (1/2)m(v_ix² + v_iy² + v_iz²). 
The gas is confined in a container of internal volume V, the walls of 
which are perfect reflectors for incident gas atoms. By "negligible 
dimensions" we mean the atoms are very small compared to the mean 
distance of separation, so collisions are very rare and most of the 
time each atom is in free motion. We also mean that we do not need 
to consider the effects of atomic rotation. We shall call this simple 
model a perfect gas of point atoms. 

Next we must ask what distribution of velocities and positions of 
the N atoms in volume V corresponds to a state of equilibrium. As 
the atoms rebound from the walls and from each other, they cannot 
lose energy, for the collisions are elastic. In collisions between the 
atoms what energy one loses the other gains. The total energy of 
translational motion of all the atoms, 

U = (1/2)m Σ_i (v_ix² + v_iy² + v_iz²) = (1/2)m Σ_i v_i² = N⟨K.E.⟩_tran        (2-1)

is constant. The last part of this set of equations defines the average 
kinetic energy ⟨K.E.⟩_tran of translation of an atom of the gas as be- 
ing U divided by the number of atoms N. (The angular brackets ⟨ ⟩ 
will symbolize average values.) 

As the gas settles down to equilibrium, U does not change but the 
randomness of the atomic motion increases. At equilibrium the atoms 
will be uniformly distributed throughout the container, with a density 
(N/V) atoms per unit volume; their velocities will also be randomly 
distributed, as many moving in one direction as in another. Some at- 
oms are going slowly and some rapidly, of course, but at equilibrium 
the total x component of atomic momentum, Σ m v_ix, is zero; simi- 
larly with the total y and z components of momentum. The total x 
component of the kinetic energy, (1/2)Σ m v_ix², is not zero, however. 
At equilibrium it is equal to the total y component and to the total z 
component, each of which is equal to one-third of the total kinetic en- 
ergy, according to Eq. (2-1), 

(1/2) Σ_i m v_ix² = (1/3) N⟨K.E.⟩_tran        (2-2)

At equilibrium all directions of motion are equally likely. 






Pressure in a Perfect Gas 

Next we ask how many atoms strike the area dA of container wall 
per second. For simplicity we orient the axes so that the positive x 
axis points normally into dA (Fig. 2-1). Consider first all those atoms 
in V which have their x component of velocity, v_ix, equal to some 
value v_x (v_x must be positive if the atom is to hit dA). All these kinds 
of atoms, which are a distance v_x from the wall or closer, will hit the 
wall in the next second, and a fraction proportional to dA of those will 
hit dA in the next second. In fact a fraction (v_x/V) dA of all the 
atoms in V which have x component of velocity equal to v_x will hit 
dA per second. Each of these atoms, as it strikes dA during the sec- 
ond, rebounds with an x component -v_x, so each of these atoms im- 
parts a momentum 2mv_x to dA. Thus the momentum imparted per 
second by the atoms with x velocity equal to v_x is 2mv_x(v_x/V) dA 
times the total number of atoms in V having x velocity equal to v_x. 
And therefore the total momentum imparted to dA per second is the 
sum of (2mv_ix²/V) dA for each atom in V that has a positive value 
of v_ix. 

FIG. 2-1. Reflection of atoms, with velocity v, from area 
element dA in the yz plane. 

Since half the atoms have a positive value of v_ix, the total momen- 
tum imparted to dA per second is 






Σ_{v_ix>0} (2m v_ix²/V) dA = (2/3)(N/V)⟨K.E.⟩_tran dA = (2/3)(U/V) dA        (2-3)

where we have used Eq. (2-2) to express the result in terms of the 
mean atomic kinetic energy, defined in Eq. (2-1). Since this is equal 
to the force P dA on dA, we finally arrive at an equation relating the 
pressure P of a perfect gas of point atoms in a volume V, in terms 
of the mean kinetic energy ⟨K.E.⟩_tran of translation per atom or in 
terms of the total energy content of the gas per unit volume (U/V) 
(total as long as the only energy is kinetic energy of translation, that 
is): 

P = (2/3)(U/V)   or   PV = (2/3)U = (2/3)N⟨K.E.⟩_tran        (2-4)

This is a very interesting result, for it demonstrates the great 
stability of the relationships between aggregate quantities such as P 
and U for systems in equilibrium. The relationship of Eq. (2-4) holds 
no matter what distribution in speed the atoms have, as long as 
their total energy is U, as long as the atoms are uniformly distrib- 
uted in space, and as long as all directions of motion are equally likely 
(i.e., as long as the gas is in equilibrium). Subject to these provisos, 
every atom could have kinetic energy ⟨K.E.⟩_tran, or half of them 
could have kinetic energy (1/2)⟨K.E.⟩_tran and the other half energy 
(3/2)⟨K.E.⟩_tran, or any other distribution having an average value 
⟨K.E.⟩_tran. As long as the gas is uniform in space and isotropic in direc- 
tion the relation between P, V, and N⟨K.E.⟩_tran is that given in Eq. 
(2-4). Even the proportionality constant is fixed; PV is not just pro- 
portional to N⟨K.E.⟩_tran — the factor is 2/3, no matter what the ve- 
locity distribution is. 
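This insensitivity is easy to check numerically. The following sketch (in Python, with assumed unit masses and two deliberately different speed distributions of the same total energy) assigns each atom a random isotropic direction and computes PV as m Σ v_ix², which Eq. (2-3) identifies with the momentum delivered to the walls; the ratio to (2/3)U comes out 1 in both cases:

```python
import random, math

# Sketch (unit mass assumed): PV = m * sum(v_ix^2) over all atoms, from
# Eq. (2-3); compare with (2/3)U for two different speed distributions.
def PV_from_speeds(speeds, m=1.0):
    total = 0.0
    for s in speeds:
        cos_t = random.uniform(-1.0, 1.0)       # isotropic direction
        phi = random.uniform(0.0, 2.0 * math.pi)
        vx = s * math.sqrt(1.0 - cos_t**2) * math.cos(phi)
        total += vx * vx
    return m * total

N = 200_000
all_same = [1.0] * N                            # every atom the same speed
half_half = [math.sqrt(0.5)] * (N // 2) + [math.sqrt(1.5)] * (N // 2)

for speeds in (all_same, half_half):
    U = sum(0.5 * s * s for s in speeds)        # total kinetic energy
    print(PV_from_speeds(speeds) / ((2.0 / 3.0) * U))   # both print ~1.0
```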

From our earlier discussion we may suspect that ⟨K.E.⟩_tran is a 
function of the gas temperature T; if T is increased, the kinetic en- 
ergy of the gas atoms should increase. We shall see what this relation- 
ship is in the next chapter. 




State Variables and 
Equations of State 

To recapitulate, when a thermodynamic system is in a certain equi- 
librium state, certain aggregate properties of the system, such as 
pressure and temperature, called state variables, have specific values, 
determined only by the state and not by the previous history of the sys- 
tem. Alternately, specifying the values of a certain number of state 
variables specifies the state of the system; the number of variables, 
required to specify the state uniquely, depends on the system and on 
its constraints. For example, if the system is a definite number of 
like molecules in gaseous form within a container, then only two vari- 
ables are needed to specify the state— either the pressure of the gas 
and the volume of the container, or the pressure and temperature of 
the gas, or else the volume and temperature. If the system is a mix- 
ture of two gases (such as hydrogen and oxygen) which react chemi- 
cally to form a third product (such as water vapor), the relative abun- 
dance of two of the three possible molecular types must be specified, 
in addition to the total pressure and volume (or P and T, or T and 
V), to determine the state. If the gas is paramagnetic, and we wish to 
investigate its thermomagnetic properties, then the strength ℋ of the 
applied magnetic field (or else the magnetic polarization of the gas) 
must also be specified. 

Extensive and Intensive Variables 

One state variable is simply the amount of each chemical substance 
present in the system. The convenient unit for this variable is the 
mole; 1 mole of a substance, which has molecular weight M, is M 
kilograms of the substance (1 mole of hydrogen gas is 2 kg of H₂, 1 
mole of oxygen is 32 kg of O₂). By definition, each mole contains the 
same number of molecules, N₀ = 6 × 10^26, called Avogadro's number. 
In many respects a mole of gas behaves the same, no matter what its 
composition. When the thermodynamic system is made up of a single 
substance then the number of moles present (which we shall denote by 







the letter n) is constant. But if the system is a chemically reacting 
mixture the n's for the different substances may change. 

State variables are of two sorts, one sort directly proportional to 
n, the other not. For example, suppose we have two equal amounts of 
the same kind of gas, each of n moles and each in equilibrium at the 
same temperature T in containers of equal volume V. We then con- 
nect the containers so the two samples of gas can mix. The combined 
system now has 2n moles of gas in a volume 2V, and the total inter- 
nal energy of the system is twice the internal energy U of each origi- 
nal part. But the common temperature T and pressure P of the mixed 
gas have the same values they had in the original separated states. 
Variables of the former type, proportional to n (such as U and V), 
are called extensive variables; those of the latter type (such as T and 
P) are called intensive variables. At thermodynamic equilibrium the 
intensive variables have uniform values throughout the system. 
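The bookkeeping can be made concrete with the perfect gas law, Eq. (3-1) below (the amounts here are assumed, and U = (3/2)nRT anticipates Eq. (3-3)):

```python
# Sketch: combine two identical samples of gas; n and V double, T held.
R = 8315.0                   # joules per degree Kelvin per mole (text value)
n, V, T = 1.0, 1.0, 300.0    # moles, cubic meters, degrees Kelvin (assumed)

P_before = n * R * T / V             # Eq. (3-1)
U_before = 1.5 * n * R * T           # Eq. (3-3), monatomic perfect gas

P_after = (2 * n) * R * T / (2 * V)  # the two containers connected
U_after = 1.5 * (2 * n) * R * T

print(P_after == P_before)       # True: P (and T) intensive, unchanged
print(U_after == 2 * U_before)   # True: U (and V) extensive, doubled
```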

A basic state variable for all thermodynamic systems (almost by 
definition) is its temperature T, which is an intensive variable. At 
present we have agreed to measure its value by a thermometer; a 
better definition will be given later. Related to the temperature is the 
heat capacity of the system, the heat required to raise the system 1 
degree in temperature. Because heat is not a state variable, the 
amount of heat required depends on the way the heat is added. For 
example, the amount of heat required to raise T by 1 degree, when 
the volume occupied by the system is kept constant, is called the heat 




capacity at constant volume and is denoted by C_V. The heat required 
to raise T 1 degree when the pressure is kept constant is called the 
heat capacity at constant pressure and is denoted C_P. A system at 
constant pressure usually expands when heated, thus doing work, so 
C_P is usually greater than C_V. 

FIG. 3-1. Heat capacity at constant volume C_V of a solid, in 
units of 3nR, where R is the gas constant [see Eq. 
(3-1)], as a function of temperature in units of the 
characteristic temperature of the solid, T/Θ (see Fig. 
20-1). 

These heat capacities are state variables, in fact they are extensive 
variables; their units are joules per degree. The capacities per mole 
of substance, c_v = (C_V/n) and c_p = (C_P/n), are called specific heats, 
at constant volume or pressure, respectively. They have been meas- 
ured, for many materials, and a number of interesting regularities 
have emerged. For example, the specific heat at constant volume, c_v, 
for any monatomic gas is roughly equal to 12,000 joules per degree 
centigrade per mole, independent of T and P over a wide range of 
states, whereas c_v for diatomic gases is roughly 20,000 joules per 
degree centigrade per mole, with a few exceptions. A typical plot of 
c_v for a solid is given in Fig. 3-1, showing that c_v is independent of 
T for solids only when T is quite large (there are more exceptions to 
this rule with solids than with gases). 
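These round numbers are close to (3/2)R and (5/2)R per mole, a regularity explained by the equipartition arguments later in the book; a short check (sketch, using the value of R quoted with Eq. (3-1) below):

```python
# Sketch: compare the quoted specific heats with (3/2)R and (5/2)R.
R = 8315.0        # joules per degree per mole, as quoted with Eq. (3-1)
print(1.5 * R)    # ~12,470: the "roughly 12,000" for monatomic gases
print(2.5 * R)    # ~20,790: the "roughly 20,000" for diatomic gases
```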

Pairs of Mechanical Variables 

Other state variables are of a mechanical, rather than thermal, 
type. For example, there is the pressure P (in newtons per square 
meter), an intensive variable appropriate for fluids, although applica- 
ble also for solids that are uniformly compressed (in general, in sol- 
ids, one needs a tensor to describe the stress). Related to P is the 
extensive variable V (in cubic meters), the volume occupied by the 
system. The pair define a mechanical energy; work P dV (in joules) 
is done by the system on the container walls if its volume is increased 
by dV when it is in equilibrium at pressure P. The pair P and V are 
the most familiar of the mechanical state variables. For a bar of ma- 
terial, the change in dimensions may be simple stretching, in which 
case the extensive variable would be the length L of the bar, the in- 
tensive variable would be the tension J, and the work done on the bar 
by stretching it an additional amount dL would be J dL. 

Or possibly the material may be polarized by a magnetic field. The 
intensive variable here is the impressed magnetic intensity ℋ (in am- 
pere-turns per meter) and the extensive variable is the total magneti- 
zation ℳ of the material. Reference to a text on electromagnetic the- 
ory will remind us that, related to ℋ, there is the magnetic induction 
field ℬ (in webers per square meter). For paramagnetic material of 
magnetic susceptibility χ, a magnetic field causes a polarization 𝒫 of 
the material, which is related to ℋ and ℬ through the equation 
ℬ = μ₀ℋ(1 + χ) = μ₀(ℋ + 𝒫), where μ₀ is the permeability of vacuum, 
4π × 10^-7 henrys per meter. The total energy contributed to the mate- 
rial occupying volume V, by application of field ℋ, is (1/2)μ₀Vℋ𝒫, 
exclusive of the "energy of the vacuum" (1/2)μ₀Vℋ², the total mag- 
netic energy being (1/2)ℋℬV. 

Suppose we define the total magnetization of the body as being the 
quantity ℳ = μ₀V𝒫 (in weber-meters); for paramagnetic materials ℳ 
would equal μ₀Vχℋ. Then the magnetic work done on the body in mag- 
netic field ℋ, when its magnetization is increased by dℳ, would be 
ℋ dℳ, the integral of which, for ℳ = μ₀Vχℋ, becomes (1/2)μ₀Vℋ𝒫, as 
desired. Magnetization ℳ is thus the extensive variable related to ℋ. 
A similar pair of state variables can be devised for dielectrics and the 
electric field. 

There is also an intensive variable, related to the variable n, the 
number of moles of material in the system. If we add dn moles of 
material to the system we add energy μ dn, where μ is called the 
chemical potential of the material. Its value can be measured by meas- 
uring the heat generated by a chemical reaction, as will be shown in 
Chapter 10. 

As we mentioned earlier, we need to determine experimentally a 
certain minimum number of relationships between the state variables 
of a system before the theoretical machinery of thermodynamics can 
"take hold" to predict the system's other thermal properties. One of 
these relationships is the dependence of one of the heat capacities, 
either C_V or C_P (or C_L or C_J, or C_ℋ or C_ℳ, depending on the me- 
chanical or electromagnetic variables of interest) on T and on P or V 
(or L or J, or on ℋ or ℳ, as the case may be). We shall show later 
that, if C_V is measured, C_P can be computed (and similarly for the 
other pairs of heat capacities); thus only one heat capacity needs to be 
measured as a function of the independent variables of the system. 

The Equation of State of a Gas 

Another necessary empirical relationship is the relation between 
the pair of mechanical variables P and V (or J and L, or ℋ and ℳ) 
and the temperature T for the system under study. Such a relation- 
ship, expressed as a mathematical equation, is called an equation of 
state. There must be an equation of state known for each pair of me- 
chanical variables of interest. We shall write down some of those of 
general interest, which will be used often later. 

Parenthetically it should be noted that although these relationships 
are usually experimentally determined, in principle it should be pos- 
sible to compute them from our present knowledge of atomic behavior, 
using statistical mechanics. 

The equation of state first to be experimentally determined is the 
one relating P, V, and T for a gas, discovered by Boyle and by 
Charles. The relation is expressed by the equation PV = nR(T + T₀), 
where, if T is in degrees centigrade, constant R is roughly 8300 
joules per degree centigrade per mole and T₀ is roughly 273°C. This 




suggests that we change the origin of our temperature scale from 0°C 
to -273°C. The temperature measured from this new origin (called 
absolute zero) is called absolute temperature and is expressed in de- 
grees Kelvin. For T in degrees Kelvin this equation of state is 

PV = nRT (3-1) 

Actually this is only a rough approximation of the equation of state 
of an actual gas. Experimentally it is found to be a pretty good ap- 
proximation for monatomic gases, like helium. Moreover, the ratio 
PV/nT for any gas approaches the value 8315 joules per degree Kel- 
vin per mole as P/T approaches zero. Since a gas is "most gassy" 
when the pressure is small and the atoms are far apart, we call Eq. 
(3-1) the perfect gas law and call any gas that obeys it a perfect gas. 
We could, alternately, use Eq. (3-1) as another (but not the final) defi- 
nition of temperature; T is equal to PV/nR for a perfect gas. 

We now are in a position to illustrate how kinetic theory can supple- 
ment an empirical thermodynamic formula with a physical model. In 
the previous chapter we calculated the pressure exerted by a gas of N 
point particles confined at equilibrium in a container of volume V. 
This should be a good model of a perfect gas. As a matter of fact Eq. 
(2-4) has a form remarkably like that of Eq. (3-1). All we need to do 
to make the equations identical is to set 

nRT = (2/3)U = (2/3)N⟨K.E.⟩_tran

The juxtaposition is most suggestive. We have already pointed out that 
N, the number of molecules, is equal to nN₀, where N₀, Avogadro's 
number, is equal to 6 × 10^26 for any substance. Thus we reach the re- 
markably simple result, that RT = (2/3)N₀⟨K.E.⟩_tran for any perfect 
gas. For this model, therefore, the average kinetic energy per mole- 
cule is proportional to the temperature, and the proportionality con- 
stant (3/2)(R/N₀) is independent of the molecular mass. The ratio 
(R/N₀) = 1.4 × 10^-23 joules per degree Kelvin is called k, the Boltz- 
mann constant. 

Thus the model suggests that for those gases which obey the perfect 
gas law fairly accurately, the average kinetic energy of molecular 
translation is directly proportional to the temperature, 

⟨K.E.⟩_tran = (3/2)kT   (perfect gas)        (3-2)

independent of the molecular mass. Only the kinetic energy of trans- 
lation enters into this formula; our model of point atoms assumed their 
rotational kinetic energy was negligible. We might expect that this 




would be true for actual monatomic gases, like helium and argon, and 
that for these gases the total internal energy is 

U = N⟨K.E.⟩_tran = (3/2)NkT = (3/2)nRT        (3-3)

Measurement shows this to be nearly correct [see discussion of Eq. 
(6-11)]. For polyatomic gases U is greater, corresponding to the ad- 
ditional kinetic energy of rotation (the additional term does not enter 
into the equation for P, however). We shall return to this point, to 
enlarge on and to modify it, as we learn more. 
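To fix the scale of these quantities, a short sketch (the temperature and the helium mass are assumed illustrative values) evaluates Eq. (3-2) and the corresponding root-mean-square speed:

```python
import math

# Sketch with assumed inputs: mean translational energy at room
# temperature, Eq. (3-2), and the r.m.s. speed from (1/2)mv^2 = (3/2)kT.
k = 1.4e-23          # Boltzmann constant, joules per degree Kelvin (text)
T = 300.0            # about room temperature (assumed)
m_helium = 6.6e-27   # kilograms per helium atom (assumed)

ke_tran = 1.5 * k * T
v_rms = math.sqrt(3.0 * k * T / m_helium)
print(f"<K.E.>_tran = {ke_tran:.2e} J, v_rms = {v_rms:.0f} m/s")
```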

Other Equations of State 

Of course the equation of state for an actual gas is not as simple as 
Eq. (3-1). We could, instead of transforming our measurements into 
an equation, simply present the relationship between P, V, and T in 
the form of a table of numbers or a set of curves. But, as we shall 
see, the thermal behavior of bodies usually is expressed in terms of 
the first and second derivatives of the equation of state, and taking de- 
rivatives of a table of numbers is tedious and subject to error. It is 
often better to fit an analytic formula to the data, so we can differen- 
tiate it more easily. 

A formula that fits the empirical behavior of many gases, over a 
wider range of T and P than does Eq. (3-1), is the Van der Waals 
approximation, 

(V - nb)(P + an²/V²) = nRT        (3-4)

For large enough values of V this approaches the perfect gas law. 
Typical curves for P against V for different values of T are shown 
in Fig. 3-2. For temperatures smaller than (8a/27bR) there is a range 
of P and V for which a given value of pressure corresponds to three 
different values of V. This represents (as we shall see later) the 
transition from gas to liquid. Thus the Van der Waals formula covers 
approximately both the gaseous and liquid phases, although the accu- 
racy of the formula for the liquid phase is not very good. 

It is also possible to express the equation of state as a power series 
in (1/V), 

P = (nRT/V)[1 + (n/V)B(T) + (n/V)²C(T) + ···]        (3-5)

This form is called the virial equation and the functions B(T), C(T), 
etc., are called virial coefficients. Values of these coefficients and 



FIG. 3-2. The Van der Waals equation of state. Plots of b²P/a 
versus V/nb for different values of t = RbT/a. 
Point C is the critical point (see Fig. 9-3). 

their derivatives can then be tabulated or plotted for the substance 
under study. 
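The three-volume behavior below the critical temperature is easy to exhibit numerically. In the reduced variables of Fig. 3-2, Eq. (3-4) becomes b²P/a = t/(v - 1) - 1/v², with v = V/nb and t = RbT/a; the sketch below (sample points chosen arbitrarily) tabulates a few isotherms:

```python
# Sketch of Van der Waals isotherms, Eq. (3-4), in reduced variables:
#   b^2 P / a = t/(v - 1) - 1/v^2,   v = V/nb,   t = RbT/a
def reduced_P(v, t):
    return t / (v - 1.0) - 1.0 / v**2

for t in (0.25, 8.0 / 27.0, 0.35):      # below, at, and above t = 8/27
    pts = [(v, round(reduced_P(v, t), 4)) for v in (1.5, 2.0, 3.0, 5.0, 10.0)]
    print(f"t = {t:.3f}:", pts)
# Below t = 8/27 the isotherm is not monotonic in v, so one value of the
# pressure can correspond to three volumes (the gas-liquid region).
```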

Corresponding equations of state can be devised for solids. A sim- 
ple one, which is satisfactory for temperatures and pressures that are 
not too large (below the melting point for T, up to several hundred at- 
mospheres for P), is 



V = V₀(1 + βT - κP)        (3-6)



Both β, which is called the thermal expansion coefficient, and κ, 
called the compressibility, are small quantities, of the order of 10^-6 
for metals, for example. They are not quite constant; they vary as T 
and P are changed, although the variation for most solids is not large 
for the usual range of T and P. 

The other pairs of mechanical variables also have their equations 
of state. For example, in a stretched rod the relation between tension 




J and length L and temperature T is, for stretches within the elastic 
limit, 

J = (A + BT)(L - L₀)        (3-7)

where A, B, and L₀ are constants (approximately); B is negative for 
many substances but positive for a few, such as rubber. 

Likewise there is a magnetic equation of state, relating magnetic 
intensity ℋ, magnetization ℳ, and T. For paramagnetic materials, 
for example, Curie's law, 

ℳ = nDℋ/T        (3-8)

is a fairly good approximation, within certain limits for T and ℋ. The 
Curie constant D is proportional to the magnetic susceptibility of the 
substance. 

Partial Derivatives 

In all these equations there is a relationship between at least three 
variables. We can choose any pair of them to be the independent vari- 
ables; the other one is then a dependent variable. We shall often wish 
to compute the rate of change of a dependent variable with respect to 
one of the independent variables, holding the other constant. This rate, 
called a partial derivative, is discussed at length in courses in ad- 
vanced calculus. In thermodynamics, since we are all the time chang- 
ing from one pair of independent variables to another, we find it advis- 
able to label each partial by both independent variables, the one varied 
and the one held constant. The partial (∂P/∂V)_T, for example, is the 
rate of change of P with respect to V, when T is held constant; V 
and T are the independent variables in this case, and P is expressed 
explicitly as a function of V and T before performing the differenti- 
ation. 

There are a number of relationships between partial derivatives 
that we shall find useful. If z and u are dependent variables, func- 
tions of x and y, then, by manipulation of the basic equation 

dz = (∂z/∂x)_y dx + (∂z/∂y)_x dy

we can obtain 

(∂z/∂x)_y = (∂u/∂x)_y/(∂u/∂z)_y = 1/(∂x/∂z)_y ;   (∂x/∂y)_z = -(∂z/∂y)_x/(∂z/∂x)_y        (3-9)

The last equation can be interpreted as follows: On the left we express 
x as a function of y and z, on the right z is expressed as a function 

STATE VARIABLES AND EQUATIONS OF STATE 21 

of x and y before differentiating and the ratio is then reconverted to 
be a function of y and z to effect the equation. Each partial is itself 
a function of the independent variables and thus may also be differen- 
tiated. Since the order of differentiation is immaterial, we have the 
useful relationship 



L /iz\ "I . r_a_/Bz\ 



(3-10) 



As an example of the use of these formulas, we can find the partial 
(∂V/∂T)_P, as function of P and V or of T and V, for the Van der 
Waals formula (3-4): 

(∂V/∂T)_P = -(∂P/∂T)_V/(∂P/∂V)_T = [nR/(V - nb)] / {[nRT/(V - nb)²] - (2an²/V³)}
          = R(V - nb)V³ / [RTV³ - 2an(V - nb)²]
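This sort of manipulation can also be handed to a computer-algebra system; the following sketch (using the sympy library) verifies the result just obtained:

```python
import sympy as sp

# Sketch: verify (dV/dT)_P for the Van der Waals gas, using
# (dV/dT)_P = -(dP/dT)_V / (dP/dV)_T from Eq. (3-9).
V, T, n, R, a, b = sp.symbols('V T n R a b', positive=True)
P = n*R*T/(V - n*b) - a*n**2/V**2        # Eq. (3-4) solved for P

dVdT_P = -sp.diff(P, T) / sp.diff(P, V)
quoted = R*(V - n*b)*V**3 / (R*T*V**3 - 2*a*n*(V - n*b)**2)
print(sp.simplify(dVdT_P - quoted))      # prints 0
```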



We shall often be given the relevant partial derivatives of a state 
function and be required to compute the function itself by integration. 
If (∂z/∂x)_y = f(x) and (∂z/∂y)_x = g(y) this is straightforward; we inte- 
grate each partial separately and add 

z = ∫f(x) dx + ∫g(y) dy

But if either partial depends on the other independent variable it is 
not quite so simple. For example, if (∂z/∂x)_y = f(x) + ay and 
(∂z/∂y)_x = ax, then the integral is 

z = ∫f(x) dx + axy   [not ∫f(x) dx + 2axy]

as may be seen by taking partials of z. The cross term appears in 
both partials and we include it only once in the integral. To thus co- 
alesce two terms of the integral, the two terms must of course be 
equal. This seems to be assuming more of a relationship between 
(∂z/∂x)_y and (∂z/∂y)_x than we have any right to do, until we remem- 
ber that they are related, according to Eq. (3-10), in just the right way 
so the cross terms can be coalesced. When this is so, the differential 
dz = (∂z/∂x)_y dx + (∂z/∂y)_x dy is a perfect differential, which can be 
integrated in the manner just illustrated to obtain z, a function of x 
and y, the integrated value coming out the same no matter what path 
in the x,y plane we choose to perform the integration along, as long as 
the terminal points of the path are unchanged (Fig. 3-3). 

A differential dz = f(x,y) dx + g(x,y) dy, where (∂f/∂y)_x is not equal to (∂g/∂x)_y, results in an integral which depends on the path of integration as well as the end points, is called an imperfect differential, and is distinguished by the bar through the d. The integral of the perfect differential dz = y dx + x dy, from (0,0) to (a,b) over the path from (0,0) to (0,b) to (a,b), is

$$ 0\cdot\int_0^b dy + b\cdot\int_0^a dx = ab $$

which equals that for the path (0,0), (a,0), (a,b):

$$ 0\cdot\int_0^a dx + a\cdot\int_0^b dy = ab $$

FIG. 3-3. Integration in the xy plane.



On the other hand, the integral from (0,0) to (a,b) of the imperfect differential đz = y dx − x dy is ab over the first route and −ab over the second. Such a differential cannot be integrated to produce a state function z(x,y). However, we can multiply the imperfect differential đz by an appropriate function of x and y (in this case 1/y²), which will turn it into a perfect differential, du; in this case

$$ (đz/y^2) = du = (1/y)\,dx - (x/y^2)\,dy $$

and

$$ u = (x/y) $$



The factor that converts an imperfect differential into a perfect one is 
called an integrating factor . One always exists (although it may be 
hard to find) for differentials of two independent variables. For more 
than two independent variables there are imperfect differentials for 
which no integrating factor exists. 
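A short numerical experiment makes the path dependence, and the rescue by the integrating factor, concrete. The following sketch (plain Python; the helper line_integral and the corner points are our own constructions, and the paths start at (1,1) rather than (0,0) so that the factor 1/y² stays finite) integrates đz = y dx − x dy and then du = đz/y² along two rectangular routes:

    def line_integral(f_dx, f_dy, pts, steps=20000):
        # midpoint-rule integral of f_dx dx + f_dy dy along straight segments
        total = 0.0
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            for k in range(steps):
                t = (k + 0.5)/steps
                x, y = x0 + t*(x1 - x0), y0 + t*(y1 - y0)
                total += (f_dx(x, y)*(x1 - x0) + f_dy(x, y)*(y1 - y0))/steps
        return total

    a_pt, b_pt = 3.0, 2.0
    route1 = [(1.0, 1.0), (1.0, b_pt), (a_pt, b_pt)]   # up, then across
    route2 = [(1.0, 1.0), (a_pt, 1.0), (a_pt, b_pt)]   # across, then up
    # the imperfect differential dz = y dx - x dy: the two routes disagree
    print(line_integral(lambda x, y: y, lambda x, y: -x, route1),
          line_integral(lambda x, y: y, lambda x, y: -x, route2))
    # after multiplying by 1/y**2 both routes give u = x/y between the
    # end points: 3/2 - 1 = 1/2
    print(line_integral(lambda x, y: 1/y, lambda x, y: -x/y**2, route1),
          line_integral(lambda x, y: 1/y, lambda x, y: -x/y**2, route2))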




The First Law of Thermodynamics



An important state function is the internal energy U of the system. For a perfect gas of point atoms, Eq. (3-3) indicates that U = (3/2)nRT, if T and V, or T and P, are the independent variables; U = (3/2)PV if P and V are. The internal energy of a system is an extensive variable.

Work and Internal Energy 

The internal energy U can be changed by having the system do work đW against some externally applied force, or by having this force do work −đW on the system. For example, if the system is confined under uniform pressure, an increase in volume would mean that the system did work đW = P dV; if the system is under tension J, it would require work −đW = J dL to be done on the system to increase its length dL. Similarly an increase in magnetization d𝔐 in the presence of a field ℋ will increase U by ℋ d𝔐. Or, if dn moles of a substance with chemical potential μ is added, U would increase by μ dn. In all these cases work đW is being done in an organized way by the system and U is increased by −đW. Note our convention: a positive đW is work done by the system, a negative value represents work done on the system, so that the change in U is opposite in sign to đW.

Note also that we have been using the symbol of the imperfect differential for đW, implying that the amount of work done by the system depends on the path (i.e., on how it is done). For example, the work done by a perfect gas in going from state 1 of Fig. 4-1 to state 2 differs whether we go via path a or path b. Along path 1a, V does not change, so no work is done by or on the gas, although the temperature changes from T_1 = (P_1V_1/nR) to T_a = (P_2V_1/nR). Along path a2, P does not change, so that the work done by the gas in going along the whole of path 1a2 is ΔW_a = P_2(V_2 − V_1). Similarly, the work done by the gas in going along path 1b2 is ΔW_b = P_1(V_2 − V_1), differing from ΔW_a by the factor (P_1/P_2). This same sort of argument can be used to

23 




show that work done by the system, in consequence of a variation of 
any of the mechanical variables that describe its state, cannot be a 
state variable. 
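The same bookkeeping in numbers (a sketch; the pressures and volumes are arbitrary choices with P_1 > P_2 and V_2 > V_1):

    P1, P2, V1, V2 = 4.0e5, 1.0e5, 1.0e-3, 3.0e-3   # Pa and m**3
    W_1a2 = 0.0 + P2*(V2 - V1)  # 1->a at constant V does no work; a->2 at P2
    W_1b2 = P1*(V2 - V1) + 0.0  # 1->b at constant P1; b->2 at constant V
    print(W_1a2, W_1b2)         # 200 J versus 800 J: the ratio is P1/P2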

Something more should be said about the meaning of the diagram 
of Fig. 4-1. In our calculations we tacitly assumed that at each point 
along each path the system was in an equilibrium state, for which the 
equation of state PV = nRT held. But for a system to be in equilib- 
rium, so that P and T have any meaning, it must be allowed to settle 
down, so that P and V (and therefore T) are assumed constant. How, 
then, can we talk about going along a path, about changing P and V, 
and at the same time assume that the system successively occupies 
equilibrium states as the change is made? Certainly if the change is 
made rapidly, sound waves and turbulence will be created and the 
equation of state will no longer hold. What we must do is to make the 
change slowly and in small steps, going from 1 to i and waiting till 
the system settles down, then going slowly to j, and so on. Only in 
the limit of many steps and slow change can we be sure that the sys- 
tem is never far from equilibrium and that the actual work done will 
approach the value computed from the equation of state. In thermody- 
namics we have to limit our calculations to such slow, stepwise 
changes (called quasistatic processes) in order to have our formulas 
hold during the change. This may seem to be an intolerable limitation 
on the kinds of processes thermodynamics can deal with; we shall see 
later that the limitation is not as severe as it may seem. 

Heat and Internal Energy 

If the only way to change the system's energy were to perform work on it or have it do work, the picture would be simple. Not only dU
but also dW would be a perfect differential; whatever work was per- 
formed on the system could eventually be recovered as mechanical (or 
electrical or magnetic) energy. This was the original theory of ther- 
modynamic systems; work was work and heat was heat. The introduc- 
tion of heat đQ served to raise the temperature of the body (indeed
the rise in temperature was the usual way in which the heat added 
could be measured, as was pointed out at the beginning of Chapter 2), 
and when the body was brought back to its initial temperature it would 
have given up the same heat that had been given it earlier. We could 
thus talk about an internal energy of the system, which was the net 
balance of work done on or by the system, and we could talk about the 
heat possessed by the body, the net balance of heat intake and output, 
measured by the body's temperature. 

It was quite a shock to find that this model of matter in bulk was 
inconsistent with observation. A body's temperature could be changed 
by doing work on it; a body could take in heat (from a furnace, say) 
and produce mechanical work. It was realized that we cannot talk 



FIG. 4-1. Plots of quasistatic processes on the PV plane. 

about the heat "contained" by the system, nor about the mechanical 
energy it contains. It possesses just one pool of contained energy, 
which we call its internal energy U, contributed to by input of both 
mechanical work and also of heat, which can be withdrawn either as 
mechanical energy or as heat. Any change in U, dU is the difference 
between the heat added, d"Q, and the work done by the system tlW dur- 
ing a quasistatic process, 



dU = dQ - aw 

= dQ - P dV + J dL + H dM + /i dn + 



(4-1) 



where dU is a perfect differential and đQ and đW are imperfect ones. Note the convention used here; đQ is the heat added to the system, đW is the work done by the system.

This set of equations is the first law of thermodynamics . It states 
that mechanical work and heat are two forms of energy and must be 
lumped together when we compute the change in internal energy of the 
system. It was not obvious to physicists of the early nineteenth cen- 
tury. To have experiments show that heat could be changed into work 
ad libitum, that neither đQ nor đW were perfect differentials, seemed
at the time to introduce confusion into a previously simple, symmetric 
theory. 

There were some compensations. Gone was the troublesome ques- 
tion of how to measure the heat "contained" by the system. The ques- 
tion has no meaning; there is no state variable Q, there is only inter- 
nal energy U to measure. Also the amount of heat đQ added could




sometimes be most easily measured by measuring dU and đW and computing đQ = dU + đW. An accurate measurement of the amount of
heat added is even now difficult to make in many cases (the heat pro- 
duced by passage of electric current through a resistance is relatively 
easy to measure, but the direct measurement of heat produced in a 
chemical reaction is still not easy). 

Of course, compensations or not, Eq. (4-1) was the one that corre- 
sponded with experiment, so it was the one to use, and people had to 
persuade themselves that the new theory was really more "simple" 
and "obvious" than the old one. By now this revision of simplicity has
been achieved; the idea of heat as a separate substance appears to us 
"illogical." 

Just as with work, the total amount of heat added or withdrawn from a system depends on the process, on the path in the P,V plane of Fig. 4-1, for example. Of course the process must be of that slow, stepwise kind, called quasistatic, if we are to use our thermodynamic formulas to calculate its change. To go from 1 to a in Fig. 4-1 we must remove enough heat from the gas, keeping its volume constant meanwhile, to lower its temperature from T_1 = (P_1V_1/nR) to T_a = (P_2V_1/nR). We could do this relatively quickly (but not quasistatically) by placing the gas in thermal contact with a constant-temperature heat source at temperature T_a. Such a source, sometimes called a heat reservoir, is supposed to have such a large heat capacity that the amount of heat contributed by the gas will not change its temperature. In this case the gas would not be in thermal equilibrium until it settled down once more into equilibrium at T = T_a. To carry out a quasistatic process, for which we could use our formulas to compute the heat added, we should have to place the gas first into contact with a heat reservoir at temperature T_1 − dT, allowing it to come to equilibrium, then place it in contact with a reservoir at T_1 − 2dT, and so on.

To be sure, if the gas is a perfect gas of point atoms, we already know that U = (3/2)PV, so that U_a − U_1 = −(3/2)V_1(P_1 − P_2), whether the system passes through intermediate equilibrium states or not, as long as states 1 and a are equilibrium states. Then since in this case đW = 0, we can immediately find ΔQ. But if we did not know the formula for U, but only knew the heat capacity of the gas at constant volume, we should be required (conceptually) to limit the process of going from 1 to a to a quasistatic one, in order to use C_V to compute the heat added. For a quasistatic process, for a perfect gas of point atoms where C_V = (3/2)nR, the heat added to the gas between 1 and a is

$$ Q_{1a} = \int C_V\,dT = \left(\tfrac{3}{2}nR\right)\int_{P_1}^{P_2} \frac{V_1\,dP}{nR} = -\tfrac{3}{2}V_1(P_1 - P_2) $$

checking with the value calculated from the change in U.






Quasistatic Processes 

In going from a to 2 the same problem arises. We can imagine that the gas container is provided with a piston, which can be moved to change the volume V occupied by the gas. We could place the gas in thermal contact with a heat reservoir at temperature T_2 = (P_2V_2/nR) and also move the piston so the volume changes rapidly from V_1 to V_2 and then wait until the gas settles down to equilibrium. In this case we can be sure that the internal energy U will end up having the value (3/2)nRT_2 = (3/2)P_2V_2, but we cannot use thermodynamics to compute how much work was done during the process or how much heat was absorbed from the heat reservoir. If, for example, instead of moving a piston, we turned a stopcock and let the gas expand freely into a previously evacuated volume (V_2 − V_1), the gas would do no work while expanding. Whereas, if we moved the piston very slowly, useful work would be done and more heat would have to be taken from the reservoir in order to end up with the same value of U at state 2. In the case of free expansion, the energy not given up as useful work would go into turbulence and sound energy, which would then degenerate into heat, and less would be taken from the reservoir by the time the system settled down to state 2.

If we did not know how U depends on P and T, but only knew the value of the heat capacity at constant pressure [which we shall show later equals (5/2)nR for a perfect gas of point atoms], we should have to devise a quasistatic process, going from a to 2, for which to compute ΔQ_a2 and ΔW_a2 and thence, by Eq. (4-1), to obtain ΔU. For example, we can attach the piston to a device (such as a spring) which will maintain a constant pressure P_2 on the gas no matter what position the piston takes up (such a device could be called a constant-pressure work source, or a work reservoir). We then place the gas in contact with a heat reservoir at temperature T_a + dT, wait until the gas comes to equilibrium at slightly greater volume, place it in contact with another reservoir at temperature T_a + 2dT, and so on. The work done in this quasistatic process at constant pressure is, as we said earlier, ΔW_a2 = P_2(V_2 − V_1). The heat donated by the heat reservoir [if C_p = (5/2)nR] is ΔQ_a2 = (5/2)nR ∫dT = (5/2)P_2(V_2 − V_1), and the difference is ΔU = ΔQ_a2 − ΔW_a2 = (3/2)P_2(V_2 − V_1), as it must be.

Thus thermodynamic computations, using an appropriate quasistatic process, can predict the change in internal energy U (or in any other state variable) for any process, fast or slow, which begins and ends in an equilibrium state. But these calculations cannot predict the amount of intake of heat or the production of work during the process unless the process differs only slightly from the quasistatic one used in the calculations. It behooves us to avoid incomplete differentials, such as đW and đQ, and to express the thermodynamic changes in a system during a process in terms of state variables, which can




be computed for any equilibrium state, no matter how the system ac- 
tually arrived at the state. 

Heat Capacities 

To integrate U for a simple system, where

$$ dU = đQ - P\,dV \tag{4-2} $$

we need to work out some relationships between the heat capacities and the partial derivatives of U. For example, if T and V are chosen to be the independent variables, the heat absorbed in a quasistatic process is

$$ đQ = dU + P\,dV = (\partial U/\partial T)_V\,dT + [(\partial U/\partial V)_T + P]\,dV \tag{4-3} $$

Since C_V is defined as the heat absorbed per unit increase in T, when dV = 0, we see that

$$ C_V = (\partial U/\partial T)_V \tag{4-4} $$

so that Eq. (4-3) can be written

$$ đQ = C_V\,dT + [(\partial U/\partial V)_T + P]\,dV \tag{4-5} $$

If T and V are varied so that P remains constant, then when T changes by dT, V will change by (∂V/∂T)_P dT and the amount of heat absorbed is

$$ đQ = C_p\,dT = C_V\,dT + [(\partial U/\partial V)_T + P](\partial V/\partial T)_P\,dT $$

or

$$ C_p = C_V + (\partial V/\partial T)_P[(\partial U/\partial V)_T + P] \tag{4-6} $$

In our earlier discussion we stated that for a perfect gas of point atoms C_V = (3/2)nR and C_p = (5/2)nR; we can now justify our statements. From Eq. (3-3) we know that for such a gas U = (3/2)nRT, so Eq. (4-4) gives us C_V immediately. It also shows that, for this gas, (∂U/∂V)_T = 0, so that, from Eq. (4-6),

$$ C_V = \tfrac{3}{2}nR; \qquad C_p = C_V + P\left(\frac{\partial V}{\partial T}\right)_P = \tfrac{5}{2}nR \tag{4-7} $$

for a perfect gas of point atoms.
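The chain of Eqs. (4-4) to (4-7) can be verified symbolically; the following is a minimal sketch with sympy, assuming nothing beyond U = (3/2)nRT and PV = nRT:

    import sympy as sp

    T, P, n, R = sp.symbols('T P n R', positive=True)
    U = sp.Rational(3, 2)*n*R*T       # Eq. (3-3); (dU/dV)_T = 0
    Cv = sp.diff(U, T)                # Eq. (4-4)
    V = n*R*T/P                       # equation of state solved for V
    Cp = Cv + sp.diff(V, T)*(0 + P)   # Eq. (4-6) with (dU/dV)_T = 0
    print(Cv, sp.simplify(Cp))        # 3*R*n/2 and 5*R*n/2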

A similar set of relationships can be derived for other pairs of mechanical variables. For example, for paramagnetic materials, the specific heats for constant 𝔐 and for constant ℋ are obtained from Eq. (4-1) (assuming that V, L, and n are constant):



$$ đQ = (\partial U/\partial T)_{\mathcal{M}}\,dT + [(\partial U/\partial \mathcal{M})_T - \mathcal{H}]\,d\mathcal{M} $$

from which we can obtain

$$ C_{\mathcal{M}} = \left(\frac{\partial U}{\partial T}\right)_{\mathcal{M}}; \qquad C_{\mathcal{H}} = C_{\mathcal{M}} + \left(\frac{\partial \mathcal{M}}{\partial T}\right)_{\mathcal{H}}\left[\left(\frac{\partial U}{\partial \mathcal{M}}\right)_T - \mathcal{H}\right] \tag{4-8} $$



For a material obeying Curie's law 𝔐 = (nDℋ/T), it again turns out that (∂U/∂𝔐)_T = 0, analogous to the perfect gas, so that

$$ C_{\mathcal{H}} = C_{\mathcal{M}} + (nD\mathcal{H}^2/T^2) = C_{\mathcal{M}} + (1/nD)\mathcal{M}^2 \tag{4-9} $$

Strictly speaking, C_𝔐 should be written C_{V,L,n,𝔐}; but since we are usually concerned with one pair of variables at a time, no ambiguity arises if we omit all but the variables of immediate interest.



Isothermal and Adiabatic Processes 

Other quasistatic processes can be devised besides those at constant volume and at constant pressure. For example, the system may be placed in thermal contact with a heat reservoir and the mechanical variables may be varied slowly enough so that the temperature of the system remains constant during the process. This is called an isothermal process. A heat capacity for this process does not exist (formally speaking, C_T is infinite). However it is important to be able to calculate the relationship between the heat đQ absorbed from the reservoir and the work đW done by the system while it proceeds.

For the perfect gas, where (∂U/∂V)_T = 0, and for paramagnetic materials, where (∂U/∂𝔐)_T = 0, and for other systems where U turns
out to be a function of T alone, the heat absorbed from the reservoir 
during the isothermal process exactly equals the work done by the sys- 
tem. Such systems are perfect isothermal energy transformers, 
changing work into heat or vice versa without holding out any of it 
along the way. The transformation cannot continue indefinitely, how- 
ever, for physical limits of volume or elastic breakdown or magnetic 
saturation or the like will intervene. 

For less-simple substances the heat absorbed in an element of an isothermal process is

$$ đQ = dU + đW = \left[\left(\frac{\partial U}{\partial V}\right)_{T,\mathcal{M}} + P\right]dV + \left[\left(\frac{\partial U}{\partial \mathcal{M}}\right)_{T,V} - \mathcal{H}\right]d\mathcal{M} + \sum_i \left[\left(\frac{\partial U}{\partial n_i}\right)_{T,V} - \mu_i\right]dn_i + \cdots \tag{4-10} $$



differing from the work done by the amount by which U increases as V or 𝔐 or n_i is changed isothermally. We remind ourselves that μ_i dn_i is the chemical energy introduced into the system when dn_i moles of substance i is introduced or created in the system, and thus that −μ_i dn_i is the chemical analogue of work done.

Another quasistatic process can be carried out with the system isolated thermally, so that đQ is zero. This is called an adiabatic process; for it the heat capacity of the system is zero. The relationship between the variables can be obtained from Eq. (4-1) by setting đQ = 0. For example, for a system with V and T as independent variables, using Eqs. (4-5) and (4-6), the change of T with V in an adiabatic process is

$$ \left(\frac{\partial T}{\partial V}\right)_S = -\frac{1}{C_V}\left[\left(\frac{\partial U}{\partial V}\right)_T + P\right] = -(\gamma - 1)\left(\frac{\partial T}{\partial V}\right)_P \tag{4-11} $$

where γ = (C_p/C_V) is a state variable. The reason for using the subscript S to denote an adiabatic process will be elucidated in Chapter 6. We see that when γ is constant (as it is for a perfect gas) the adiabatic change of T with V is proportional to the change of T with V at constant pressure.

For the perfect gas of point atoms, where C_p = (5/2)nR and C_V = (3/2)nR, γ = 5/3 and (∂T/∂V)_P = (P/nR) = (T/V), the relation between T and V for an adiabatic expansion is

$$ \frac{dT}{T} + (\gamma - 1)\frac{dV}{V} = 0 \qquad\text{or}\qquad TV^{\gamma - 1} = (PV^{\gamma}/nR) = \text{const.} \tag{4-12} $$

Compressing a gas adiabatically increases its temperature; because γ > 1, pressure increases more rapidly, with change of volume, in an adiabatic compression than in an isothermal compression, where (PV/nR) is constant.
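Equation (4-12) is also easy to watch numerically. The sketch below (the starting values are arbitrary choices; we simply Euler-step the differential form of the equation) carries the gas down an adiabat and checks that TV^(γ−1) stays put:

    gamma = 5.0/3.0
    T, V = 300.0, 1.0e-3                 # arbitrary starting state (K, m**3)
    invariant = T*V**(gamma - 1.0)
    steps = 200000
    dV = (2.0e-3 - V)/steps              # expand to twice the volume
    for _ in range(steps):
        T -= (gamma - 1.0)*T*dV/V        # dT/T = -(gamma - 1) dV/V
        V += dV
    print(T)                             # about 189 K, i.e. 300*2**(-2/3)
    print(T*V**(gamma - 1.0)/invariant)  # 1 to within the step error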

Similarly, for a paramagnetic material that obeys Curie's law and happens to have C_𝔐 independent of T and 𝔐, the relation between T and 𝔐 during adiabatic magnetization is [see Eqs. (4-8) and (4-9)]

$$ C_{\mathcal{M}}\,dT = \mathcal{H}\,d\mathcal{M} \qquad\text{or}\qquad C_{\mathcal{M}}\,\frac{dT}{T} = \frac{\mathcal{M}}{nD}\,d\mathcal{M} \qquad\text{or}\qquad T = T_0\,\exp(\mathcal{M}^2/2nDC_{\mathcal{M}}) \tag{4-13} $$

When 𝔐 = 0, the atomic magnets, responsible for the paramagnetic properties of the material, are rotating at random with thermal motion; impressing a magnetic field on the material tends to line up the magnets and reduce their thermal motion and so to "squeeze out"






their heat energy, which must go into translational energy of the at- 
oms (increased temperature) since heat is not removed in an adiabatic 
process. Reciprocally, if a paramagnetic material is magnetized, 
brought down to as low a temperature as possible and then demagne- 
tized, the material's temperature will be still further reduced. By 
this process of adiabatic demagnetization, paramagnetic materials 
have been cooled from about 1°K to less than 0.01°K, the closest to absolute zero that has been attained.




The Second Law of Thermodynamics

Once it had been demonstrated that heat is a form of energy, the 
proof of the first law of thermodynamics became merged with the 
proof of the conservation of energy. The experimental authentication 
of the second law is less direct; in a sense it is evidenced by the suc- 
cess of thermodynamics as a whole. The enunciation of the second law 
is also roundabout; its various paraphrases are many and, at first 
sight, unconnected logically. We could introduce the subject by asking 
whether there exists an integrating factor for the imperfect differential đQ, or by asking for a quantitative measure of the difference be-
tween the quasistatic process leading from 1 to a in Fig. 4-1 and the 
more rapid process of placing the gas immediately in contact with a 
heat reservoir at temperature T a , or else by asking whether 100 per 
cent of the heat withdrawn from a heat reservoir can be converted into 
mechanical work, no matter how much is withdrawn. We shall start 
with the last question and we find that in answering it we answer the 
other two. 

Heat Engines 

Chemical or nuclear combustion can provide a rough equivalent of 
a constant temperature heat reservoir; as heat is withdrawn more can 
be provided by burning more fuel. Can we arrange it so this continu- 
ous output of heat energy is all continuously converted into mechanical 
work? The second law of thermodynamics answers this question in 
the negative, and provides a method of computing the maximum frac- 
tion of the heat output which can be changed into work in various cir- 
cumstances. 

At first sight this appears to contradict a result obtained in Chap- 
ter 4. There it was pointed out that a system, with internal energy 
that is a function of temperature only (such as a perfect gas or a per- 
fect paramagnetic material), when placed in contact with a constant- 
temperature heat source, can isothermally transform all the heat it 
withdraws from the reservoir into useful work, either mechanical or 




electromagnetic. The trouble with such a process is that it cannot 
continue to do this indefinitely. Sooner or later the pressure gets too 
low or the tension gets greater than the elastic limit or the magnetic 
material becomes saturated, and the transformer's efficiency drops 
to zero. What is needed is a heat engine, a thermodynamic system 
that can operate cyclically, renewing its properties periodically, so it 
can continue to transform heat into work indefinitely. 

Such an engine cannot be built to run entirely at one temperature, 
that of the heat source. If it did so the process would be entirely iso- 
thermal, and if we try to make an isothermal process cyclic by revers- 
ing its motion (compressing the gas again, for example) we find we are 
taking back all the work that has been done and reconverting it into 
heat; returning to the start leaves us with no net work done and all the 
heat given back to the reservoir. Our cycle, to result in net work done 
and thus net heat withdrawn, must have some part of it operating at a 
lower temperature than that of the source. And thus we are led to the 
class of cyclical operations called Carnot cycles. 



Carnot Cycles 

A Carnot cycle operates between two temperatures, a hotter, T_h, that of the heat source, and a colder, T_c, that of the heat sink. Any sort of material can be used, not just one having U a function of T only. And any pair of mechanical variables can be involved, P and V or J and L or ℋ and 𝔐 (we shall use P and V just to make the discussion specific). The cycle consists of four quasistatic operations: an isothermal expansion from 1 to 2 (see Fig. 5-1) at temperature T_h,




FIG. 5-1. Example of a Carnot cycle, plotted in the PV 
plane. 




withdrawing heat ΔQ_12 from the source and doing work ΔW_12 (not necessarily equal to ΔQ_12); an adiabatic expansion from 2 to 3, doing further work ΔW_23 but with no change in heat, and ending up at temperature T_c; an isothermal compression at T_c from 3 to 4, requiring work −ΔW_34 = ΔW_43 to be done on the system and contributing heat −ΔQ_34 = ΔQ_43 to the heat sink at temperature T_c, ending at state 4, so placed that process 4 to 1 can be an adiabatic compression, requiring work −ΔW_41 = ΔW_14 (ΔQ_41 = 0) to be done on the system to bring it back to state 1, ready for another cycle (Fig. 5-1). This is a specialized sort of cycle but it is a natural one to study and one that in principle should be fairly efficient. Since the assumed heat source is at constant temperature, part of the cycle had better be isothermal, and if we must "dump" heat at a lower temperature, we might as well give it all to the lowest-temperature reservoir we can find. The changes in temperature should thus be done adiabatically.

This cycle, of course, does not convert all the heat withdrawn from the reservoir at T_h into work; some of it is dumped as unused heat into the sink at T_c. The net work done by the engine per cycle is the area inside the figure 1234 in Fig. 5-1, which is equal to ΔW_12 + ΔW_23 + ΔW_34 + ΔW_41 = ΔW_12 + ΔW_23 − ΔW_43 − ΔW_14 and which, according to the first law, is equal to ΔQ_12 + ΔQ_34 = ΔQ_12 − ΔQ_43. The efficiency η with which the heat withdrawn from the source at T_h is converted into work is equal to the ratio between the work produced and the heat withdrawn,

$$ \eta = \frac{\Delta W_{12} + \Delta W_{23} - \Delta W_{43} - \Delta W_{14}}{\Delta Q_{12}} = \frac{\Delta Q_{12} - \Delta Q_{43}}{\Delta Q_{12}} = 1 - \frac{\Delta Q_{43}}{\Delta Q_{12}} \tag{5-1} $$

We note that, since all the operations are quasistatic, the cycle is reversible; it can be run backward, withdrawing heat ΔQ_43 from the reservoir at temperature T_c and depositing heat ΔQ_12 in the reservoir at T_h, requiring work ΔQ_12 − ΔQ_43 to make it go.
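A numerical Carnot cycle shows Eq. (5-1) at work. The sketch below uses arbitrary state values; states 3 and 4 are placed on the adiabats through 2 and 1 by means of Eq. (4-12), and for the perfect gas the isothermal heat equals the isothermal work, as noted in Chapter 4. It anticipates the result, Eq. (5-5), that the efficiency is 1 − T_c/T_h:

    import math

    n, R, gamma = 1.0, 8.314, 5.0/3.0
    Th, Tc = 500.0, 300.0
    V1, V2 = 1.0e-3, 2.0e-3
    V3 = V2*(Th/Tc)**(1.0/(gamma - 1.0))  # adiabat 2->3: T V**(gamma-1) const.
    V4 = V1*(Th/Tc)**(1.0/(gamma - 1.0))  # adiabat through state 1
    Q12 = n*R*Th*math.log(V2/V1)          # heat taken in at Th
    Q43 = n*R*Tc*math.log(V3/V4)          # heat dumped at Tc
    print(1.0 - Q43/Q12, 1.0 - Tc/Th)     # both print 0.4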

There are a large number of Carnot cycles, all operating between T_h and T_c: ones using P and V to generate work, involving different substances with different equations of state; ones using ℋ and 𝔐 to produce magnetic energy, using different paramagnetic substances; and so on. One way of stating the second law is to say that all Carnot cycles operating between the temperatures T_h and T_c have the same efficiency. Another way is to say that no engine, or combination of engines, operating between a maximum temperature T_h and a minimum temperature T_c can be more efficient than any Carnot cycle operating between these temperatures.

Statements of the Second Law 

To show that these statements are equivalent we shall show that if we could find a cycle of greater efficiency than a Carnot cycle, operating between T_h and T_c, we could combine them to obtain a perfectly efficient engine, thus more efficient than either. Figure 5-2 shows how this can be done. We assume that the less efficient one is the



assumed- 
more-efficient 
engine 



AQ 



AQ 12 



standard 
Carnot 
engine 




FIG. 5-2. Carnot engine (reversed) driven by an engine 
assumed more efficient; the combination would 
make a perfect engine, which is impossible. 



"standard" one described in Eq, (5-1). We shall run this backward, 
requiring net work AQ 12 - AQ 43 to take heat AQ 43 from the lower tem- 
perature and depositing heat AQ 12 at the upper. We adjust the pre- 
sumed better engine so its exhaust AQ" at T c is equal to AQ 43 , the 
same amount that the first engine withdraws. The amount of heat AQ' 
it withdraws at T^ must be larger than AQ^ if its efficiency 
1 - (AQ"/AQ') is to be larger than the value 1 - (AQ 43 /AQ 12 ) for the 
"standard" engine. We now use this better engine to run the "stand- 
ard" one in its reversed cycle. Actually there will be work left over, 
an amount (AQ' - AQ 43 ) - (AQ 12 - AQ 43 ) = AQ' - AQ 12 , which can be 
used as we please. The combined engine thus withdraws a net heat 
AQ' - AQ 12 from the upper heat reservoir, dumps no net heat into the 
lower, and produces net work AQ' - AQ 12 ; it is a perfect engine. Thus 
a contradiction of the first statement above leads to a contradiction of 
the second statement. If there can be no engine more efficient than a 
Carnot cycle, then all Carnot cycles (between T^ and T c ) must have 
the same efficiency. 

We can see now that still another way of stating the second law is as follows: It is impossible to convert, continuously, heat from a reservoir at one temperature T_h into work, without at the same time transferring additional heat from T_h to a colder temperature T_c. This way of stating it is called Kelvin's principle. Still another way is to state that it is impossible to transfer, in a continuous manner, heat from a lower-temperature reservoir to one at higher temperature without at the same time doing work to effect the transfer. This is called Clausius' principle. Its equivalence to the other three statements can be demonstrated by manipulating the combination of Fig. 5-2; by running it backward and adjusting the assumed better engine so that ΔQ′ − ΔQ″ = ΔQ_12 − ΔQ_43 and thus ΔQ″ < ΔQ_43 and ΔQ′ > ΔQ_12, for example.

The Thermodynamic Temperature Scale 

If all Carnot cycles operating between the same pair of temperatures, T_h and T_c, have the same efficiency, this efficiency must be simply a function of T_h and T_c:

$$ \eta = 1 - \Psi(T_h, T_c); \qquad \Psi(T_h, T_c) = (\Delta Q_{43}/\Delta Q_{12}) \tag{5-2} $$

The ratio of the heat dumped to that withdrawn must be the same for all these cycles. To find how Ψ depends on the temperatures T_h and T_c, as measured by a thermometer, we break up a Carnot cycle into two cycles, each using the same material, as shown in Fig. 5-3. The upper one takes heat ΔQ_12 from the upper reservoir at a temperature θ_h (on the scale of the thermometer we are using), does work ΔQ_12 − ΔQ_65, and delivers heat ΔQ_65 to an intermediate reservoir at a measured temperature θ_m. This reservoir immediately passes on this heat ΔQ_65 to the second engine, which produces work ΔQ_65 − ΔQ_43 and delivers heat ΔQ_43 to a reservoir at temperature θ_c as measured on our thermometer. The combination, which produces a total work of ΔQ_12 − ΔQ_43, is thus completely equivalent to a single Carnot cycle, using the same material and operating between θ_h and θ_c on our scale, withdrawing ΔQ_12 from the upper, exhausting ΔQ_43 at the lower, and doing work ΔQ_12 − ΔQ_43. Therefore, according to Eq. (5-2), the efficiencies η_u and η_l of the two component cycles and the efficiency η_c of the combination, considered as a single engine, are related as follows:

$$ \eta_u = 1 - \Psi(\theta_h, \theta_m), \quad \Psi(\theta_h, \theta_m) = \frac{\Delta Q_{65}}{\Delta Q_{12}}; \qquad \eta_l = 1 - \Psi(\theta_m, \theta_c), \quad \Psi(\theta_m, \theta_c) = \frac{\Delta Q_{43}}{\Delta Q_{65}} $$

$$ 1 - \eta_c = \Psi(\theta_h, \theta_c) = \frac{\Delta Q_{43}}{\Delta Q_{12}} = \Psi(\theta_h, \theta_m)\,\Psi(\theta_m, \theta_c) \tag{5-3} $$

For the equation relating the three values of the function Ψ, for the three pairs of values of measured temperature θ, to be valid, Ψ








FIG. 5-3. Arrangement of two Carnot cycles so their com- 
bined effect is equivalent to one cycle between 
the temperature extremes. 

must have the functional form Ψ(x,y) = [T(y)/T(x)], where T(θ) is some single-valued, monotonically increasing function of θ, the temperature reading on the thermometer used. For then Ψ(θ_m,θ_c)Ψ(θ_h,θ_m) will equal

$$ [T(\theta_c)/T(\theta_m)][T(\theta_m)/T(\theta_h)] = [T(\theta_c)/T(\theta_h)] = \Psi(\theta_h, \theta_c) $$

Therefore,

$$ \Psi(\theta_h, \theta_c) = \frac{\Delta Q_{43}}{\Delta Q_{12}} = \frac{T(\theta_c)}{T(\theta_h)} \qquad\text{or}\qquad \frac{\Delta Q_{43}}{T(\theta_c)} = \frac{\Delta Q_{12}}{T(\theta_h)} \tag{5-4} $$

We can thus experimentally determine the function T(θ) by using various Carnot cycles, all having the same upper temperature, read as θ_h on our thermometer, but each going to a different lower temperature, discarding a different amount of heat and thus doing different amounts of work. By measuring the common value of ΔQ_12 and the different values of the heat discarded, ΔQ_43, we can compute T(θ_c) as (ΔQ_43/ΔQ_12) times the common value T(θ_h). If, for example, a cycle with lower reading θ_d has its discarded heat just 1/2 of ΔQ_12, then T(θ_d) = (1/2)T(θ_h).

The numerical values of θ_h and all the θ_c's were obtained from the arbitrary scale of the particular thermometer used. It would seem more appropriate to use T(θ) itself, rather than θ, for a temperature scale. We can use, for our upper reservoir, the temperature of melting ice, and set T(θ_h) = 273°. The value of T(θ_c) for any colder temperature (such as the boiling point of oxygen, for example) is then (ΔQ_43/ΔQ_12) × 273°, where ΔQ_12 and ΔQ_43 are the heats involved in the Carnot cycle operating between the melting point of ice and the boiling point of oxygen. Such a scale of temperature, as determined by measured heat ratios for Carnot cycles, is called the thermodynamic scale, and measurements given in this scale are in degrees Kelvin.

This now completes our series of definitions of temperature started in Chapter 1. From now on temperature T will always be measured in degrees Kelvin. In its terms, the efficiency of a Carnot cycle operating between T_h and T_c (both measured in degrees Kelvin) is

$$ \eta = 1 - \Psi(T_h, T_c); \qquad \Psi(T_h, T_c) = \frac{\Delta Q_{43}}{\Delta Q_{12}} = \frac{T_c}{T_h} \tag{5-5} $$

This is the maximum efficiency we can get from an engine that operates between T_h and T_c.

Thus the second law is a sort of relativistic principle. The minimal 
temperature at which we can exhaust heat is determined by the tem- 
perature of our surroundings, and this limits the efficiency of transfer 
of heat into work. Heat at temperatures high compared to our sur- 
roundings is " high- quality" heat; if we handle it properly a large por- 
tion of it can be changed into useful work. Heat at temperature twice 
that of our surroundings (on the Kelvin scale) is already half degraded; 
only half of it can be usefully employed. And heat at the temperature of 
our surroundings is useless to us for getting work done. Even heat at 






a million degrees Kelvin would be useless if the whole universe were 
at this same temperature. Temperature differences enable us to pro- 
duce mechanical energy, not absolute magnitudes of average temper- 
ature. 




FIG. 5-4. Reversible cycle (heavy line) simulated by a 
combination of several Carnot cycles. 



In principle we can build up a combination of Carnot cycles to sim- 
ulate any kind of reversible cycle, such as the one shown by the heavy 
line in Fig. 5-4. In such a cycle, heat is taken on and given off at different temperatures, none of the elementary processes being isothermal or adiabatic. The maximum temperature reached is T_h, for the isothermal curve tangent to the top of the loop, and the minimum is T_c, for the lower tangent isothermal. The work produced is the area
within the heavy line. This cycle is crudely approximated by the five 
Carnot cycles shown, with their isothermals and adiabatics as light 
lines; a better approximation could be obtained with a large number of 
Carnot cycles. The efficiency of subcycle 3 is greatest, because it 
operates between the greatest spread of temperatures; the others have 
less efficiency. Thus any cycle that takes in or gives off heat while 
the temperature is changing is not as efficient as a Carnot cycle op- 
erating between the same maximum and minimum temperatures, i.e., 
which takes on all its heat at T_h and which gives up all its heat at T_c.




Entropy 



We notice that, for a Carnot cycle, the relationship between each element đQ of heat taken on and the thermodynamic temperature T at which it is taken on (or given off) is such that the integral of đQ/T taken completely around the cycle is zero. The heat taken on at T_h is ΔQ_12 and the heat "taken on" at T_c is the negative quantity ΔQ_34 = −ΔQ_43; Eq. (5-5) states that the sum (ΔQ_12/T_h) − (ΔQ_43/T_c) = 0.

A Thermal-State Variable

Since any quasistatic, reversible cycle can be considered as a sum of Carnot cycles, as in Fig. 5-4, we see that for any such cycle the integral of the quantity đQ/T around the whole cycle is zero. But for any thermodynamic state function Z(x,y) (as in Fig. 6-1) the integral of the perfect differential dZ around a closed path (such as ABA in Fig. 6-1) is zero, as long as all parts of the path are reversible processes; alternately, any differential that integrates to zero around any closed path is a perfect differential and its integral is a state function of the variables x,y.

Therefore the quantity dS = đQ/T is a perfect differential, where đQ is the heat given to the system in an elementary, reversible process and T is the thermodynamic temperature of the system during the process. The integral of this perfect differential, S(x,y), is a state variable and is called the entropy of the system. It is an extensive variable, proportional to n.
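The statement can be checked numerically for the perfect gas of point atoms. The sketch below (the rectangular cycle in the P,V plane is our own choice; C_V = (3/2)nR and T = PV/nR are from the text) accumulates both đQ and đQ/T around the closed path; the first integral is the net work, the second vanishes:

    n, R = 1.0, 8.314
    Cv = 1.5*n*R
    corners = [(2e5, 1e-3), (2e5, 2e-3), (1e5, 2e-3), (1e5, 1e-3), (2e5, 1e-3)]
    Q, Q_over_T, steps = 0.0, 0.0, 50000
    for (Pa, Va), (Pb, Vb) in zip(corners, corners[1:]):
        for k in range(steps):
            P0 = Pa + (Pb - Pa)*k/steps
            V0 = Va + (Vb - Va)*k/steps
            P1 = Pa + (Pb - Pa)*(k + 1)/steps
            V1 = Va + (Vb - Va)*(k + 1)/steps
            dU = Cv*(P1*V1 - P0*V0)/(n*R)      # dU = Cv dT, with T = PV/nR
            dQ = dU + 0.5*(P0 + P1)*(V1 - V0)  # first law: dQ = dU + P dV
            Q += dQ
            Q_over_T += dQ*n*R/(0.5*(P0*V0 + P1*V1))
    print(Q)         # 100 J: the net heat equals the net work, not zero
    print(Q_over_T)  # essentially zero, as befits a state function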

This result, which is still another way of stating the second law, can be rephrased to answer the first of the questions posed in the first paragraph of Chapter 5. There is an integrating factor for đQ, if heat đQ is absorbed in a reversible process; it is the reciprocal of the thermodynamic temperature, defined in Eq. (5-5). The resulting perfect differential đQ/T measures the change dS in the state variable S, the entropy; and the difference S_2 − S_1 of entropy between equilibrium states 1 and 2 is computed by integrating đQ/T along any reversible path between 1 and 2. On the other hand, there is no integrating factor for đQ for an irreversible process.





FIG. 6-1. Paths in the xy plane for reversible processes 
(solid lines) and for spontaneous processes 
(dashed lines). 




The entropy S is the extensive variable that pairs with T as V does with P and 𝔐 with ℋ. The heat taken on by the system in a reversible process is đQ = T dS, just as the work done by the system is đW = P dV or −ℋ d𝔐. Thus the equation that represents both the first and second laws of thermodynamics is

$$ dU = T\,dS - P\,dV + J\,dL + \mathcal{H}\,d\mathcal{M} + \sum_i \mu_i\,dn_i + \cdots \tag{6-1} $$

for a reversible process. This equation, plus the empirical expressions for the heat capacities and the equations of state, constitutes the mathematical model for thermodynamics, by means of which we can predict the thermal behavior of substances in bulk.

Basic equation (6-1) can be integrated to obtain U, once we know the dependence of T, P, J, etc., on S, V, L, etc., which we can take to be the independent variables of the system. Of course we can, if we choose, use P instead of V as one of the independent variables but, if we desire to calculate U, Eq. (6-1) shows that the extensive variables S, V, L, etc., are the "natural" ones to use. When expressed as function of the extensive variables, U has the properties of a potential function, for its partial with respect to one of the extensive variables (V, for example) is equal to the corresponding thermodynamic "force," the related intensive function (P, for example). Thus

$$ T = \left(\frac{\partial U}{\partial S}\right)_{V,L,\ldots}; \qquad P = -\left(\frac{\partial U}{\partial V}\right)_{S,L,\ldots}; \qquad \mathcal{H} = \left(\frac{\partial U}{\partial \mathcal{M}}\right)_{S,V,\ldots} \tag{6-2} $$

Because Eq. (6-1) is a sum of intensive variables, each multiplied by the differential of its corresponding extensive variable, we can apply a trick devised by Euler to calculate U. Let the ν extensive variables appropriate for a system be X_1, ..., X_ν; then the corresponding intensive variables are Y_j = ∂U/∂X_j, all of which are uniform throughout the system at equilibrium. Now we increase the amount of material by a factor λ, keeping all the intensive variables Y_j constant during the change. The internal energy U for the new system is just λ times the U of the original system, and the extensive variables are also increased by λ, so that

$$ U(\lambda X_1, \ldots, \lambda X_\nu) = \lambda\,U(X_1, \ldots, X_\nu) $$

Differentiating with respect to λ on both sides and using the definitions of the intensive variables as partials of U, we have

$$ \frac{d}{d\lambda}\,U(\lambda X_1, \ldots, \lambda X_\nu) = \sum_{j=1}^{\nu} (\partial U/\partial X_j)\,X_j = \sum_j Y_j X_j = U(X_1, \ldots, X_\nu) $$

Thus, in terms of our familiar variables,

$$ U(S,V,L,\mathcal{M},n_1,n_2,\ldots) = TS - PV + JL + \mathcal{H}\mathcal{M} + \sum_i \mu_i n_i + \cdots \tag{6-3} $$

i 

This is Euler's equation, which will be of use later. It may be considered to be the basic equation of thermodynamics; all the rest may be derived from it.
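Euler's equation can be tested directly on any homogeneous U. The sketch below does it with sympy for the perfect-gas function U(S,V,n) written out at the end of the next section (b0 stands for the constant defined there; its value is immaterial to the check):

    import sympy as sp

    S, V, n, R, b0 = sp.symbols('S V n R b0', positive=True)
    U = sp.Rational(3, 2)*R*b0*n**sp.Rational(5, 3) \
        / V**sp.Rational(2, 3)*sp.exp(2*S/(3*n*R))
    T = sp.diff(U, S)                          # Eq. (6-2)
    P = -sp.diff(U, V)
    mu = sp.diff(U, n)
    print(sp.simplify(T*S - P*V + mu*n - U))   # prints 0, confirming Eq. (6-3)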

Reversible Processes 

We are now in a position to be more specific about the adjective 
reversible, which we first used for a cycle (such as a Carnot cycle) 
and which we recently have been applying to processes. To see what 
it means let us first consider a few irreversible processes. Suppose a gas is confined at pressure P within a volume V_0 of a thermally insulated enclosure, as shown in Fig. 6-2. The gas is confined to V_0 by a diaphragm D; the rest of the volume, V_1 − V_0, is evacuated. We then break the diaphragm and let the gas undergo free expansion until it comes to a new equilibrium at volume V_1. This is a spontaneous process, going automatically in one direction only. It is obviously irreversible; the gas would never return by itself to volume V_0.

Next suppose we place an object, originally at temperature T_h, in thermal contact with a heat reservoir at temperature T_c, less than T_h. Here again the process is spontaneous; heat flows from the object until it comes to equilibrium at temperature T_c. This also is an irreversible process; it would take work (or heat from a reservoir at T_h) to warm the body up again.

FIG. 6-2. The Joule experiment.

We can thus define the adjective "reversible" in a negative way; a 
reversible process is one that has no irreversible portion. To expand the gas from V_0 to V_1 reversibly we could replace the diaphragm by a piston and move it slowly to the right. During the motion, as the vol-
ume is increased by dV, the gas is never far from equilibrium, and a 
reversal of motion of the piston (so the volume decreases again by dV) 
would bring the gas back to its earlier state. In such a case we could 
retrace every part of the process in detail. Every reversible process 
is quasistatic; not all quasistatic processes are reversible. 

In deriving Eq. (6-1) we pointed out that the integral of đQ/T around a reversible cycle is zero. If the cycle is irreversible the integral differs from zero; the second law requires it to be less than zero. For example, suppose the irreversible cycle took in all its heat, an amount ΔQ′, at T_h and exhausted an amount ΔQ″, all at T_c. The efficiency of such a cycle would have to be less than the value 1 − (T_c/T_h) = 1 − (ΔQ_43/ΔQ_12) for a Carnot cycle between the same temperatures. This means that ΔQ′ would have to be smaller than ΔQ_12 or ΔQ″ would have to be larger than ΔQ_43, or both, so that the integral of đQ/T around the irreversible cycle would turn out less than that for the Carnot cycle, i.e., less than zero. The argument can be generalized for all closed cycles. Thus another way of stating the second law is that for all closed cycles the integral around the cycle,

$$ \oint (đQ/T) \le 0 \tag{6-4} $$



where the equality holds for reversible cycles and the inequality is for irreversible ones. Since dS is measured by the value of đQ/T when the process is reversible, we also have that

$$ dS \ge đQ/T \tag{6-5} $$




where again the equality holds for reversible processes, inequality 
for irreversible ones. 

Irreversible Processes 

For example, suppose the dotted line from C to D in Fig. 6-1 represents a spontaneous process, during which no heat is absorbed or given up (as is the case with the free expansion of the gas in Fig. 6-2), so that ∫(đQ/T) = 0 for the dotted line. Since the process is irreversible, ∫dS = S_D − S_C must be larger than ∫(đQ/T) = 0. In other words, during a spontaneous process taking place in a thermally isolated system, the entropy always increases.

The statement is not so simple when the spontaneous process involves transfer of heat, as with the irreversible cooling of a body from T_h to T_c mentioned above [see also the discussion four paragraphs below Eq. (4-1)]. In this case the body loses entropy and the reservoir gains it. If the heat capacity of the body is C_V, a constant, the heat reservoir gains C_V(T_h − T_c) and, since the reservoir is always at T_c, its gain in entropy is C_V[(T_h − T_c)/T_c]. The loss in entropy of the body is not much harder to compute. During its spontaneous discharge of heat to the reservoir, and before it comes to uniform temperature T_c, the body is not in equilibrium, so dS does not equal đQ/T. However we can devise a quasistatic process, placing a poor heat conductor between the body and the reservoir, so the heat flows into the reservoir slowly and the body, at any time, will have a nearly uniform temperature T, where T starts at T_h and gradually drops to T_c at the end. The loss of entropy from the body is thus the integral of C_V(dT/T), which is C_V ln(T_h/T_c) if C_V is constant. This is always smaller than C_V[(T_h − T_c)/T_c], the entropy gained by the reservoir, although the two approach each other in value as T_h approaches T_c. Thus, although the entropy of the body decreases during the spontaneous cooling of the body, the total entropy of body and reservoir (which we might call the entropy of the universe) increases by the amount

$$ \Delta S = C_V x - C_V \ln(1 + x) = C_V\left(\tfrac{1}{2}x^2 - \tfrac{1}{3}x^3 + \tfrac{1}{4}x^4 - \cdots\right) $$

where x = [(T_h − T_c)/T_c], which is positive for all values of x > −1.
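In numbers (a sketch; one mole of the point-atom gas, cooled from an arbitrarily chosen 400°K to 300°K):

    import math

    Cv = 1.5*8.314                       # one mole, Cv = (3/2)R
    Th, Tc = 400.0, 300.0
    x = (Th - Tc)/Tc
    gain = Cv*x                          # entropy gained by the reservoir
    loss = Cv*math.log(1.0 + x)          # entropy lost by the body
    print(gain - loss)                   # about +0.57 J/K
    print(Cv*(x**2/2 - x**3/3 + x**4/4)) # the truncated series, nearly the same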

Thus the statement at the end of the previous paragraph can be 
generalized by saying that in a spontaneous process of any kind, even 
if the entropy of the body decreases, the entropy of some other system 
increases even more, so that the entropy of the universe always in- 
creases during an irreversible process. This, finally, is the answer 
to the second question at the beginning of Chapter 5; the measure of 
the difference between a reversible and an irreversible process lies 
in the entropy change of the universe. 




Entropy is a measure of the unavailability of heat energy. The en- 
tropy of a certain amount of heat at low temperature is greater than it 
is at high temperature, loosely speaking. Alternately, entropy meas- 
ures the degree of disorganization of the system. Irreversible processes increase disorder, increase the amount of low-temperature heat, and thus increase the entropy of the universe. Reversible proc-
esses, on the other hand, simply transfer entropy from one body to 
another, keeping the entropy of the universe constant. A few examples 
will familiarize us with these ideas. 

Entropy of a Perfect Gas 

The entropy of n moles of a perfect gas of point atoms, for which U = (3/2)nRT and PV = nRT, may be determined by integration of Eq. (6-1), which is for this case

$$ T\,dS = dU + P\,dV \qquad\text{or}\qquad dS = \tfrac{3}{2}(nR/T)\,dT + (nR/V)\,dV $$

so

$$ S = nR\,\ln\left[(T/T_0)^{3/2}(V/V_0)\right] + S_0 $$

where S = S_0 when T = T_0 and V = V_0. Increase in either T or V increases the entropy of the gas. Instead of T and V (and n), S and V (and n) can be used as independent variables, in which case

$$ T = T_0\left(\frac{V_0}{V}\right)^{2/3} e^{2(S - S_0)/3nR}; \qquad P = P_0\left(\frac{V_0}{V}\right)^{5/3} e^{2(S - S_0)/3nR} $$

These formulas immediately provide us with the dependence of T and P on V for an adiabatic process. For a reversible adiabatic process, dS = đQ/T = 0, so S is constant, which is why partials for such processes are labeled with subscript S.

We can also use Euler's equation (6-3) to calculate the chemical potential per mole of a perfect gas of point atoms. For this system, U = TS − PV + μn, the atoms being all of one kind. Inserting the expressions for U, PV, and TS in terms of T, V, and n and dividing by n, we find that

$$ \mu = -Ts_0 + RT\,\ln\left[e^{5/2}(V_0/V)(T_0/T)^{3/2}\right] \tag{6-6} $$

where e is the base of the natural logarithms (ln e = 1) and s_0 = (S_0/n_0) is the entropy per mole at T_0, V_0, and n_0. Therefore we can use μ for an independent variable and obtain, for example,

$$ V = V_0(T_0/T)^{3/2}\,\exp\left(\frac{5}{2} - \frac{s_0}{R} - \frac{\mu}{RT}\right) $$



The function U(S,V,n) is (3/2)R(n^{5/3} b_0/V^{2/3}) e^{2S/3nR}, where the constant b_0 equals T_0(V_0/n_0)^{2/3} e^{−2s_0/3R}, independent of S, V, and n.

The Joule Experiment 

Let us return, for a page or two, to the irreversible, free-expansion process pictured in Fig. 6-2. Initially the gas is confined to volume V_0 and is at temperature T_0; after the diaphragm has broken, the gas expands spontaneously and eventually settles down to equilibrium in volume V_1 at a new temperature T_1. We should like to be able to compute T_1 and also to calculate the increase in entropy during the process for any sort of gas, not just for a perfect gas. What we need to know to do this is the nature of the final equilibrium state; we then can use the appropriate version of Eq. (6-1),

$$ dU = T\,dS - P\,dV \tag{6-7} $$

to integrate from initial to final state via a reversible path.

Equation (6-1), in its various forms, can represent any reversible process, therefore can be used to compute the difference of value of any state variable between any initial and final states. In the free-expansion case now under study, the actual process is far from reversible, but it starts and stops with equilibrium states and we can compute the difference between start and finish without bothering to learn what the system did in between.

In the free-expansion case, since the system is thermally insulated during the process and no work is done, the internal energy must be the same at the finish as it was at the start. Thus the change of value of any state variable caused by free expansion may be computed by integrating its rate of change with respect to V, at constant U. How this partial may be found, by using Eqs. (3-9) and (3-10) to manipulate Eq. (6-7), goes as follows.

First we express dS and dU in terms of their partials and the differentials of the independent variables T and V and equate coefficients of these differentials:

$$ T\left[\left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV\right] = \left(\frac{\partial U}{\partial T}\right)_V dT + \left[\left(\frac{\partial U}{\partial V}\right)_T + P\right] dV $$

so

$$ \left(\frac{\partial S}{\partial T}\right)_V = \frac{1}{T}\left(\frac{\partial U}{\partial T}\right)_V = \frac{C_V}{T}; \qquad \left(\frac{\partial S}{\partial V}\right)_T = \frac{1}{T}\left[\left(\frac{\partial U}{\partial V}\right)_T + P\right] \tag{6-8} $$



Now apply the highly useful equation (3-10) to these partials of S,



$$ \frac{1}{T}\left[\frac{\partial}{\partial V}\left(\frac{\partial U}{\partial T}\right)_V\right]_T = \frac{1}{T}\left[\frac{\partial}{\partial T}\left(\frac{\partial U}{\partial V}\right)_T\right]_V + \frac{1}{T}\left(\frac{\partial P}{\partial T}\right)_V - \frac{1}{T^2}\left[\left(\frac{\partial U}{\partial V}\right)_T + P\right] $$

or

$$ \left(\frac{\partial U}{\partial V}\right)_T + P = T\left(\frac{\partial P}{\partial T}\right)_V, \qquad\text{i.e.,}\qquad \left(\frac{\partial U}{\partial V}\right)_T = T\left(\frac{\partial P}{\partial T}\right)_V - P \tag{6-9} $$

We also see that

$$ \left(\frac{\partial C_V}{\partial V}\right)_T = \left\{\frac{\partial}{\partial T}\left[\left(\frac{\partial U}{\partial V}\right)_T\right]\right\}_V = T(\partial^2 P/\partial T^2)_V \tag{6-10} $$



for any substance having only V and T as variables. 

From these relationships we can compute the change of T with V at constant U, by using Eq. (4-4) as well as Eq. (3-9) again:

$$ \left(\frac{\partial T}{\partial V}\right)_U = -\frac{(\partial U/\partial V)_T}{(\partial U/\partial T)_V} = \frac{1}{C_V}\left[P - T\left(\frac{\partial P}{\partial T}\right)_V\right] \tag{6-11} $$

This can be integrated if we know the empirical formulas for C_V and for P in terms of T and V. For example, for a perfect gas, P = nRT/V, so P − T(∂P/∂T)_V = 0 and therefore (∂T/∂V)_U = 0. A perfect gas comes to equilibrium, after a free expansion in an insulated container, with no change in temperature. Joule first proposed and carried out measurements on such processes, to see how closely actual gases behave like perfect gases. For monatomic gases the state function n(∂T/∂V)_U (which is called the Joule coefficient) is less than 0.001°K mole per m³.

For a gas satisfying the Van der Waals equation (3-4), having a heat capacity C_V that is independent of V,

$$ \left(\frac{\partial U}{\partial V}\right)_T = T\left(\frac{\partial P}{\partial T}\right)_V - P = \frac{an^2}{V^2} $$

so

$$ \left(\frac{\partial T}{\partial V}\right)_U = -\frac{an^2}{V^2\,C_V(T)} $$

which is small because a is small for most gases. This means that we can consider C_V constant over the limited range of temperature involved, in which case the total temperature change caused by free expansion is



$$ T_1 - T_0 = -\frac{an^2}{C_V}\left(\frac{1}{V_0} - \frac{1}{V_1}\right) \tag{6-12} $$

which gives the small change in T during the expansion at constant U for a Van der Waals gas. Since V_1 > V_0 the temperature drops during the process, although the drop is small. During the expansion a small amount of the molecular kinetic energy must be lost in doing work against the small attractive forces between the molecules.
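For a feeling of scale, Eq. (6-12) with rough numbers (a sketch; a = 0.364 J·m³/mol² is approximately the Van der Waals constant of CO2, and we let one mole double its volume starting from 22.4 liters, treating C_V as (3/2)nR for the estimate):

    a, n = 0.364, 1.0
    Cv = 1.5*8.314
    V0, V1 = 22.4e-3, 44.8e-3                # m**3
    print(-(a*n**2/Cv)*(1.0/V0 - 1.0/V1))    # about -0.65 K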

Entropy of a Gas 

To compute the change of S with V at constant U for any substance we need only note that the equation T dS = dU + P dV gives us directly

$$ (\partial S/\partial V)_U = P/T \tag{6-13} $$

which, for a perfect gas, results in S_1 − S_0 = nR ln(V_1/V_0) [which could have been obtained from Eq. (6-5)]. For a Van der Waals gas,

$$ \left(\frac{\partial S}{\partial V}\right)_U = \frac{nR}{V - nb} - \frac{an^2}{V^2\,T(V)} $$

which can be integrated after we have substituted for T from Eq. (6-12) (change T_1, V_1 into T, V and solve for T as a function of V, V_0, T_0, then substitute this for the T in the term an²/V²T and integrate). However, as we found from Eq. (6-12), T changes but little during the free expansion, so little error is made by setting T = T_0 in the small term (an²/V²T). Thus a good approximation to the change in entropy of a Van der Waals gas during change from V_0 to V_1 at constant U is

$$ S_1 - S_0 = nR\,\ln\left(\frac{V_1 - nb}{V_0 - nb}\right) - \frac{an^2}{T_0}\left(\frac{1}{V_0} - \frac{1}{V_1}\right) \tag{6-14} $$

which should be compared with the entropy change for the perfect gas. 
The entropy increases in both cases, as it must for a spontaneous 
process of this sort. Since the system is insulated during the process, 
the change represents an increase in entropy of the universe. 

If we were to go from state 0 to state 1 by a reversible process, the increase in entropy of the gas would be the same as the value we have just calculated. But, to offset this, some other system would lose an equal amount, so the entropy change of the universe would be zero. For example, a reversible change from V_0, T_0 to V_1, T_0 for a perfect gas would be an isothermal expansion, replacing the diaphragm by a piston and moving the piston slowly from V_0 to V_1. The gas would do work W = nRT_0 ln(V_1/V_0) and would gain entropy [see Eq. (6-9)]



$$ \Delta S = \int_{V_0}^{V_1} (\partial S/\partial V)_T\,dV = nR\,\ln(V_1/V_0) $$

which is the same as that gained during the irreversible process. For the isothermal process, however, we have a heat reservoir attached, which loses the amount of entropy that the gas gains, so the change in entropy of the universe is zero in this case.

Entropy of Mixing 

As a final example, illustrating the connection between entropy and disorder, we shall demonstrate that mixing two gases increases their combined entropy. As shown in Fig. 6-3, we start with αn moles of a gas of type 1 (such as helium) in a volume αV at temperature T and pressure P = αnRT/αV on one side of a diaphragm and (1 − α)n moles of gas of type 2 (such as nitrogen) in volume (1 − α)V on the other side of the diaphragm, in equilibrium at the same temperature and pressure. We now destroy the diaphragm and let the gases spontaneously mix. According to our earlier statements the entropy should increase, since the mixing is an irreversible process of an isolated system. The total internal energy U of the combined system finally has the same value as the sum of the U's of the two parts initially. What has happened is that the type 1 gas has expanded its volume from αV to V and type 2 gas has expanded from (1 − α)V to V.

FIG. 6-3. Arrangement to illustrate spontaneous mixing of two different gases.

To compute the change of entropy, we use (∂S/∂V)_U, which Eq. (6-13) shows is P/T, which is nR/V for a perfect gas. Thus the increase of entropy of gas 1 is αnR times the logarithm of the ratio between final and initial volumes occupied by gas 1, with a similar expression for gas 2. The total entropy increase, called the entropy of mixing of the two gases, is



$$ \Delta S = nR\left[\alpha\,\ln\left(\frac{1}{\alpha}\right) + (1 - \alpha)\,\ln\left(\frac{1}{1 - \alpha}\right)\right] \tag{6-15} $$



which is positive for 0 < α < 1. It is largest for α = 1/2, when equal mole quantities of the two gases are mixed. We note that for perfect gases, P and T of the final state are the same as those of the initial state; for nonperfect gases P and T change somewhat during the mixing (why?).
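Evaluating Eq. (6-15) for a few mixing fractions (a sketch, per mole of mixture, n = 1):

    import math

    R, n = 8.314, 1.0
    for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
        dS = n*R*(alpha*math.log(1.0/alpha)
                  + (1.0 - alpha)*math.log(1.0/(1.0 - alpha)))
        print(alpha, dS)   # symmetric about 1/2; peak value R ln 2 = 5.76 J/K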

Entropy increase is to be expected when two different gases are mixed. But what if the two gases are the same? Does the removal of a diaphragm separating two parts of a volume V, filled with one sort of gas, change its entropy or not? When the diaphragm is in place a molecule on one side of it is more restricted in its travel than when the diaphragm is removed; but this difference is unnoticeable macroscopically. Does reinsertion of the diaphragm reduce the entropy again? We must postpone the resolution of this paradox (called Gibbs' paradox) until we treat statistical mechanics (Chapter 22).




Simple Thermodynamic Systems



We have already worked out the thermal properties of a perfect gas. 
According to Eq. (6-9) its total energy is independent of volume for, 
since P = nRT/V, 

(∂U/∂V)_T = T(∂P/∂T)_V − P = 0

We note in passing that the T in Eq. (6-9) is the thermodynamic tem- 
perature, so that the T in the perfect gas law PV = nRT is likewise 
the thermodynamic temperature. Thus our definition of a perfect gas 
must include the statement that the T in the equation of state (3-1) is 
the thermodynamic temperature. 

The Joule- Thomson Experiment 

To see how nearly actual gases come to perfect gases in behavior we can use the free expansion illustrated in Fig. 6-2. The measurement of temperature change in free expansion is called Joule's experiment and the partial derivative which is thus measured, (∂T/∂V)_U, is called the Joule coefficient of the gas. We note from Eqs. (6-10) and (6-11) that the Joule coefficient for a perfect gas is zero and that for a gas obeying the van der Waals equation it is −an²/V²C_V, a small quantity for most gases.

As with the transformation of heat into work, we can devise a continuous process corresponding to free expansion, as illustrated in Fig. 7-1. Gas is forced through a nozzle N by moving pistons A and B to maintain a constant pressure difference across the nozzle (or we can use pumps working at the proper rates). The gas on the high-pressure side is at pressure P₀ and temperature T₀; after going through the nozzle it settles down to a pressure P₁ and temperature T₁. Suppose we follow n moles of the gas as it goes through the nozzle, starting from a state of equilibrium 0 at P₀, T₀ and ending at the state of equilibrium 1 at P₁, T₁. We cannot follow the change in detail, for the process is irreversible, but we can devise a reversible path between



FIG. 7-1. The Joule-Thomson experiment.

0 and 1 [or, rather, we can let Eq. (6-7) find a reversible path for us] which will allow us to compute the difference between states 0 and 1. In particular, we can compute the temperature difference T₁ − T₀ in order to compare it with the measured difference.

This process differs from free expansion because the energy U does not stay constant; work P₀V₀ is done by piston A in pushing the gas through the nozzle, and work P₁V₁ is done on piston B by the time the n moles have all gotten through. Thus the net difference in internal energy of the n moles between state 0 and state 1 is U₁ − U₀ = P₀V₀ − P₁V₁. Instead of U remaining constant during the process, the quantity

H = U + PV    (7-1)

is the same in states 1 and 0. This quantity, called the enthalpy of the gas, is an extensive state variable.

The experiment of measuring the difference T₁ − T₀, when all parts of the system shown in Fig. 7-1 are thermally insulated, is called the Joule-Thomson experiment, and the relevant change in temperature with pressure when H is kept constant, (∂T/∂P)_H, is the Joule-Thomson coefficient of the gas. Experimentally, we find that for actual gases the temperature increases slightly at high temperatures; at low temperatures the temperature T₁ is a little less than T₀. The temperature at which (∂T/∂P)_H = 0 is called the Joule-Thomson inversion point. Continuous processes, in which a gas, at a temperature below its inversion point, is run through a nozzle to lower its temperature, are used commercially to attain low temperatures. We use work P₀V₀ − P₁V₁ to cool the n moles of gas, so Clausius' principle is not contradicted.

To compute the Joule- Thomson coefficient we manipulate the equa- 
tion for H, or rather its differential, 






dH = dU + P dV + V dP = T dS + V dP    (7-2)



as we did in Eq. (6-7) to obtain Eq. (6-10). First we note that the change in heat T dS in a system at constant pressure equals dH, so that C_P, the heat capacity of the system at constant pressure, is (∂H/∂T)_P, in contrast to Eq. (4-4). Just as internal energy U can be called the heat content of a system at constant volume, so enthalpy H can be called its heat content at constant pressure. In passing, we can combine Eqs. (4-6) and (6-9) to express the difference between C_P and C_V for any system in terms of partials obtainable from its equation of state,



C_P − C_V = T(∂V/∂T)_P (∂P/∂T)_V    (7-3)

Next we manipulate Eq. (7-2) as we did Eq. (6-7), to obtain

(∂S/∂T)_P = (1/T)(∂H/∂T)_P = C_P/T;   (∂S/∂P)_T = (1/T)[(∂H/∂P)_T − V] = −(∂V/∂T)_P    (7-4)



and

(∂C_P/∂P)_T = −T(∂²V/∂T²)_P

From this we can compute the Joule-Thomson coefficient and the change in entropy during the process,

(∂T/∂P)_H = −(1/C_P)(∂H/∂P)_T = −(1/C_P)[V − T(∂V/∂T)_P]   and   (∂S/∂P)_H = −V/T    (7-5)



For a perfect gas V = T(∂V/∂T)_P, so that (∂H/∂P)_T = 0 and the Joule-Thomson coefficient (∂T/∂P)_H is also zero; no change in temperature is produced by pushing it through a nozzle. The change in entropy of a perfect gas during the process is the integral of −V/T = −nR/P with respect to P,

ΔS = nR ln(P₀/P₁)    (7-6)

Since P₀ > P₁ this represents an increase in entropy, as it must.

For a gas obeying the van der Waals equation we can find (∂V/∂T)_P by differentiating the equation of state and manipulating,

(V − nb) dP + [nRT/(V − nb) − 2an²(V − nb)/V³] dV = nR dT

(∂V/∂T)_P = nR/[nRT/(V − nb) − 2an²(V − nb)/V³] ≈ (1/T)(V − nb)(1 + 2an/RTV)




so that

(∂T/∂P)_H ≈ −(1/C_P)(nb − 2an/RT)

since a and b are small quantities. For this same reason T and thus C_P do not change much during the process, and we can write

T₁ − T₀ ≈ (nb/C_P)(1 − 2a/RTb)(P₀ − P₁)



Since P₀ > P₁ this predicts an increase in temperature during the Joule-Thomson process if T > 2a/Rb, a decrease if T < 2a/Rb, the inversion temperature being approximately equal to 2a/Rb.
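To attach rough numbers to these results, the Python sketch below uses the standard handbook van der Waals constants for nitrogen; the choice of gas, the pressure drop, and the perfect-gas value C_P = (7/2)R per mole are assumptions of the illustration, and the simple 2a/Rb estimate is known to overshoot the measured inversion temperature.

R = 8.314       # J/(mol K)
a = 0.137       # Pa m^6/mol^2, van der Waals a for N2 (handbook value)
b = 3.87e-5     # m^3/mol, van der Waals b for N2
Cp = 3.5 * R    # heat capacity per mole; perfect-diatomic-gas value, assumed

# inversion temperature, approximately 2a/Rb as in the text:
print(f"inversion temperature ~ {2 * a / (R * b):.0f} K")

# temperature change for a 10-atm drop at T0 = 300 K, per mole (n = 1),
# using T1 - T0 = (b/Cp)(1 - 2a/(R T0 b))(P0 - P1):
T0 = 300.0
dP = 10 * 1.013e5
print(f"T1 - T0 ~ {(b / Cp) * (1 - 2 * a / (R * T0 * b)) * dP:.2f} K")

The result, about −2.5 °K, says that nitrogen at room temperature cools on throttling, being well below its inversion point.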

The corresponding increase in entropy during the process can be computed, using the same approximations as before,

ΔS ≈ nR ln(P₀/P₁) − (n/T)(a/RT − b)(P₀ − P₁)    (7-7)

which is to be compared with the result of Eq. (7-6) for a perfect gas.

Black- Body Radiation 

Let us now turn to a quite different sort of system, that called 
black-body radiation, electromagnetic radiation in equilibrium with 
the walls of an enclosure kept at temperature T. This is the radia- 
tion one would find inside a furnace with constant-temperature walls. 
It consists of radiation of all frequencies and going in all directions, 
some of it continually being absorbed by the furnace walls but an equal 
amount continually being generated by the vibrations of the atoms in 
the walls. This is a special kind of system, with special properties. 

In the first place the energy density ε of the radiation, the mean value of (1/2)E·D + (1/2)H·B, depends on the temperature but is independent of the volume of the enclosure. If the volume inside the furnace is enlarged, more radiation is generated by the walls, so that the energy density ε remains the same. Therefore the total electromagnetic energy within the enclosure, Vε(T), is proportional to the volume. This is in contrast to a perfect gas, for which U at a given temperature is constant, independent of volume. If the enclosure is increased in volume the density of atoms diminishes and so does the energy density; for radiation, if the volume is increased more radiation is produced to keep the density constant. We can, of course, consider the radiation to be a gas of photons, each with its own energy, but the contrast with atoms remains. Extra photons can be created to fill up any added space; atoms are harder to create.




In the second place the radiation pressure, the force exerted per unit area on the container walls, is proportional to the energy density. In this respect it is similar to a perfect gas of point atoms, but the proportionality constant is different. For the gas [see Eq. (2-4)] P = (2/3)[N⟨K.E.⟩_tran/V] = (2/3)ε, where ε is the energy contained per unit volume; for radiation it is (1/3)ε. The difference comes from the fact that the kinetic energy of an atom, (1/2)mv², is one-half its momentum times its velocity. For a photon, if its energy is ℏω, its momentum is ℏω/c, where c is its velocity, ω/2π its frequency, and h = 2πℏ is Planck's constant; therefore the energy of a photon is the product of its velocity and its momentum, not half this product. Since pressure is proportional to momentum, ε = 3P for the photon, ε = (3/2)P for the atom gas.

To compute U, S, and P as functions of T and V we start with the basic equation again, dU = T dS − P dV, inserting the appropriate expressions, U = Vε(T) and P = (1/3)ε(T),

T dS = T(∂S/∂T)_V dT + T(∂S/∂V)_T dV = dU + P dV = V(dε/dT) dT + ε dV + (1/3)ε dV

or

(∂S/∂T)_V = (V/T)(dε/dT);   (∂S/∂V)_T = (4/3)(ε/T)

Applying Eq. (3-10) we obtain a differential equation for ε(T),

(1/T)(dε/dT) = (4/3)(1/T)(dε/dT) − (4/3)(ε/T²)   or   dε/dT = 4ε/T

which has a solution ε(T) = aT⁴, so

U = aVT⁴;   P = (1/3)aT⁴;   S = (4/3)aVT³    (7-8)

The equation for the energy density of black-body radiation is called 
Stefan's law and the constant a is Stefan's constant. In statistical 
mechanics [see Eq. (25-9)] we shall evaluate it in terms of atomic 
constants. 

We see that the energy of black-body radiation goes up very rapidly 
with increase in temperature. Room temperature (70° F) is about 
300° K. At the temperature of boiling water (373° K) the energy density 
of radiation is already 2 1/2 times greater; at dull red heat (920° K) it 






is 100 times that at room temperature. At the temperatures encountered on earth the pressure of radiation is minute compared to usual gas pressures. At temperatures of the center of the sun (10⁷ °K) the radiation pressure supports more than half the mass above it.
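The steepness of the T⁴ law is worth tabulating. In the sketch below the value used for Stefan's constant a = 4σ/c is the accepted modern one, and the temperatures are those quoted in this section.

a = 7.566e-16   # Stefan's (radiation density) constant, J m^-3 K^-4

for T in (300.0, 373.0, 920.0, 1.0e7):
    u = a * T**4        # energy density, Eq. (7-8)
    print(f"T = {T:>10.0f} K   u = {u:9.3e} J/m^3   P = u/3 = {u/3:9.3e} Pa")

The ratios (373/300)⁴ ≈ 2.4 and (920/300)⁴ ≈ 88 reproduce the factors quoted above; at 300 °K the radiation pressure, about 2 × 10⁻⁶ Pa, is indeed minute beside the 10⁵ Pa of the atmosphere, while at 10⁷ °K it has climbed to roughly 2.5 × 10¹² Pa.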

Reference to Eq. (6-3), U = ST − PV + μn, shows that the chemical potential of black-body radiation is

μ = (1/n)(U − ST + PV) = (1/n)(aVT⁴ − (4/3)aVT⁴ + (1/3)aVT⁴) = 0

which is related to the freedom with which the number of moles of photons adjusts its value to keep the energy density constant [see Eq. (25-2)]. In this matter, also, photons are a special kind of gas.

Paramagnetic Gas 

Finally let us work out the behavior of a paramagnetic, perfect gas. Here the two equations of state are P = nRT/V and ℋ = T𝔐/nD. The heat capacity at constant V and 𝔐 is a constant, C_{V𝔐} = (3/2)nR for a monatomic gas. The basic equation is

dU = T dS − P dV + ℋ d𝔐

Three independent variables must be used; we first use T, V, and 𝔐. By methods that should be familiar by now we find

T(∂S/∂T)_{V𝔐} = (3/2)nR;   T(∂S/∂𝔐)_{TV} = (∂U/∂𝔐)_{TV} − ℋ = −T(∂ℋ/∂T)_{V𝔐}

Other manipulations result in

(∂U/∂V)_{T𝔐} = (∂U/∂𝔐)_{TV} = 0

C_{Pℋ} = C_{V𝔐} + P(∂V/∂T)_{Pℋ} − ℋ(∂𝔐/∂T)_{Pℋ} = (5/2)nR + (nDℋ²/T²)

Integrating the partials for U and S, we obtain

U = (3/2)nRT;   S = nR ln[(V/V₀)(T/T₀)^{3/2}] − (𝔐²/2nD) + S₀    (7-10)






The "natural variables" in terms of which to express U are S, V, M 
rather than T, V, 9H. To do this we express T in terms of S, V, 9H 
and obtain 

[see Eq. (6-5) et seq.] . 

The extensive variables S, V, 𝔐 are less easy to measure than are the intensive variables

P = −(∂U/∂V)_{S𝔐};   T = (∂U/∂S)_{V𝔐};   ℋ = (∂U/∂𝔐)_{SV}

which might be called the "experimental variables." Expressing the basic equation in terms of these (using P dV = −(nRT/P) dP + nR dT, for example) we have

T dS = [(5/2)nR + (nDℋ²/T²)] dT − (nRT/P) dP − (nDℋ/T) dℋ    (7-12)

For isothermal operation, dT is zero. The heat contributed to the gas by the reservoir at temperature T, when the gas pressure is changed from P₀ to P₁ and the magnetic intensity is changed from ℋ₀ to ℋ₁, is

ΔQ₀₁ = nRT ln(P₀/P₁) + (nD/2T)(ℋ₀² − ℋ₁²)    (7-13)

Increase in pressure squeezes out heat (ΔQ < 0) as does an increase in magnetization. At low temperatures a change in magnetic field produces more heat than does a change of pressure.

The behavior of the system during an adiabatic process can be computed by setting T dS = 0. The integrating factor is 1/T and the integral of

[(5/2)(nR/T) + (nDℋ²/T³)] dT − (nDℋ/T²) dℋ − (nR/P) dP = 0

is

(5/2)nR ln(T₁/T₀) − (nD/2)[(ℋ₁/T₁)² − (ℋ₀/T₀)²] − nR ln(P₁/P₀) = 0    (7-14)

If the magnetic field is kept constant (ℋ₁ = ℋ₀), the gas undergoes ordinary adiabatic compression and T₁ is proportional to the two-fifths power of the pressure P₁. Or, if the pressure is kept constant, the temperature is related to the magnetic field by the formula




ℋ₁² = T₁²[(ℋ₀²/T₀²) + (5R/D) ln(T₁/T₀)]    (7-15)

At low-enough initial temperatures or high-enough initial fields, so that (ℋ₀/T₀)² ≫ 5R/D, the final temperature T₁ is approximately proportional to the final magnetic intensity ℋ₁; an adiabatic reduction of ℋ proportionally lowers the temperature.
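Equation (7-15) is transcendental in T₁, but a bisection solves it readily. In the sketch below the Curie constant D and the initial state are invented round numbers, chosen so that (ℋ₀/T₀)² ≫ 5R/D.

import math

R = 8.314
D = 0.05              # Curie constant per mole; an invented illustrative value
T0, H0 = 1.0, 1.0e3   # initial temperature (K) and field (arbitrary units)

def residual(T1, H1):
    # Eq. (7-15) rearranged to (H1/T1)^2 - (H0/T0)^2 - (5R/D) ln(T1/T0)
    return (H1 / T1)**2 - (H0 / T0)**2 - (5 * R / D) * math.log(T1 / T0)

def final_temperature(H1):
    # the residual falls monotonically with T1, so bisection on (0, T0] works
    lo, hi = 1e-9, T0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid, H1) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for frac in (0.5, 0.1, 0.01):
    print(f"H1/H0 = {frac:5.2f}   T1 = {final_temperature(frac * H0):.5f} K")

Because (ℋ₀/T₀)² = 10⁶ while 5R/D ≈ 8 × 10², the logarithmic term hardly matters here and T₁ ≈ T₀(ℋ₁/ℋ₀), the proportional cooling described in the text.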

Finally, we can have a process which is both adiabatic and isothermal if we adjust the pressure continually so that, as the magnetic field is changed adiabatically, the temperature is kept at a constant value T (the volume then changes inversely proportional to P). The relation between ℋ₁ and P₁ that will keep T constant is

P₁ = P₀ exp[(D/2RT²)(ℋ₀² − ℋ₁²)]    (7-16)



As ℋ₁ decreases, P₁ will have to be increased exponentially to keep T constant. In this process, mechanical work is used to demagnetize the material.

Actual magnetic materials are not perfect gases, nor does their 
magnetic equation of state have exactly the form of the simple Curie 
law. In these cases one tries to find equations of state that do fit the 
data and then tries to integrate Eq. (7-12). Failing this, numerical 
integration must be used to predict the thermal properties of the ma- 
terial. 




The Thermodynamic Potentials



In connection with Eq. (6-1) we pointed out that U, the internal energy, when expressed in terms of the extensive variables S, V, L, 𝔐, nᵢ, etc., behaves like a potential energy, in that the thermodynamic "forces," the intensive variables, are expressible as gradients of U,

T = (∂U/∂S)_{Vn...};   P = −(∂U/∂V)_{Sn...};   μ = (∂U/∂n)_{SV...};   etc.    (8-1)

as was indicated in Eq. (6-2).

The Internal Energy

Moreover if we include irreversible as well as reversible processes, Eqs. (4-1), (6-1), and (6-5) show that

dU = đQ − P dV + μ dn + J dL + ℋ d𝔐 + ···
   ≤ T dS − P dV + μ dn + J dL + ℋ d𝔐 + ···    (8-2)

where the equality holds for reversible processes, the inequality for irreversible ones. Consequently if all the extensive variables are held constant (dS = dV = dL = ··· = 0) while the system is allowed to come spontaneously to equilibrium, then every change in U will have to be a decrease in value; it will only stop changing spontaneously when no further decrease is possible; and thus at equilibrium U is minimal. In other words, when the extensive variables are fixed, the equilibrium state is the state of minimal U.

It is usually possible (although not always easy) to hold the mechanical extensive variables, V, L, n, 𝔐, etc., constant, but it is more difficult to hold entropy constant during a spontaneous process. However if we allow the process to take place adiabatically, with đQ = 0




(and hold dV = dL = dn = ··· = 0), then U does not change during the process and T dS ≥ 0, so that S will reach a maximum value at equilibrium. If we had tried to keep S constant during the spontaneous
process we would have had to withdraw heat during the process, 
enough to cancel out the gain in S during the process; this would have 
decreased U, until at equilibrium U would be minimal, less than the 
original U by the amount of heat that had to be withdrawn to keep S 
constant. 

When all the mechanical extensive variables are held constant, an 
addition of heat to the system produces a corresponding increase in 
U. We thus can call U the heat content of the system at constant V, 
L, n, ••• , and can write the heat capacity 

C_{Vn...} = (∂U/∂T)_{Vn...}    (8-3)

the subscripts indicating that all the extensive mechanical variables, 
which apply to the system in question, are constant. Moreover we can 
apply Eq. (3-10) to the various intensive variables of Eq. (8-1), obtain- 
ing a very useful set of relationships, 



til) =(m . (21) (*n) 

(dT_\ = (dx\ _ /8P\ = (djA 

W sv ... \3s; vgri ...' Van/gv... \sv; Sn ... 

which are called Maxwell's relations. The relations, of course, hold 
for reversible processes. They are used to compute the differences 
in value of the various thermodynamic variables between two equilib- 
rium states. 

Enthalpy 

Although the extensive variables are the "natural" ones in which 
to express U, they are often not the most useful ones to use as inde- 
pendent variables. We do not usually measure entropy directly, we 
measure T; and often it is easier to keep pressure constant during a 
process than it is to keep V constant. We should look for functions 
that have some or all of the intensive variables as the "natural" ones. 
Formally, this can be done as follows. Suppose we wish to change 
from V to P for the independent variable. We add the product PV to 
U, to generate the function H = U + PV, called the enthalpy [see Eq. 
(7-1)] . The differential is 






dH = P dV + V dP + dU ≤ T dS + V dP + μ dn + ℋ d𝔐 + ···    (8-5)



where again the equality holds for reversible processes, the inequality 
for irreversible ones. If a system is held at constant S, P, n, ..., the 
enthalpy will be minimal at equilibrium (note that P, instead of V, is 
held constant). If H is expressed as a function of its "natural" coor- 
dinates S, P, n, ..., then the partials of H are the quantities 



Us) Pn ... T; Up) Sn ... V; UJ SP ... M 



and the corresponding Maxwell's relations are 



(§L 



' 3S /pn-. : 



/av 

\3 arc/si; 



L..-( 



ajc\ 



8P/, 



S3TT 



etc. 



(8-6) 



etc. 



(8-7) 



Geometrically, the transformation from U to H is an example of a 
Legendre transformation involving the pair of variables P, V. Func- 
tion U(V), for a specific value of V (i.e., at point Q in Fig. 8-1) has 




FIG. 8-1. Legendre transformation from U as a function 
of V to H as a function of P 



a slope dU/dV = -P(V), which defines a tangent, HQ of Fig. 8-1, 
which has an intercept on the U axis of H = U + PV. (We are not con- 




sidering any other variables except P, V, so we can use ordinary de- 
rivatives for the time being.) Solving for V as a function of P from 
the equation dU/dV = -P, we can then express H as a function of 
the slope -P of the tangent line. Since dU = -P dV and dH = dU + 
P dV + V dP = V dP we see that dH(P)/dP = V(P). Thus enthalpy H 
is the potential that has P as a basic variable instead of V. 
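The construction of Fig. 8-1 can be traced symbolically. In the sympy sketch below, the sample U(V) is the perfect-gas adiabat V^(-2/3) with all constants set to one; this particular choice is arbitrary, made only so the algebra stays readable.

import sympy as sp

V, P = sp.symbols('V P', positive=True)

U = V**sp.Rational(-2, 3)       # sample internal energy U(V); constants set to 1

P_of_V = -sp.diff(U, V)         # the slope of U(V) defines P: dU/dV = -P
V_of_P = sp.solve(sp.Eq(P, P_of_V), V)[0]          # invert for V(P)

H = sp.simplify(U.subs(V, V_of_P) + P * V_of_P)    # intercept: H = U + P V
print(H)

# the dual relation dH/dP = V(P), checked numerically at P = 2:
print(float((sp.diff(H, P) - V_of_P).subs(P, 2)))  # ~0.0, up to rounding

The printed H is a function of P alone, exactly the change of basic variable from V to P that the Legendre transformation accomplishes.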

Since any addition of heat T dS to a system when P, n, ... are held 
constant causes a like increase of H, the enthalpy can be called the 
heat content of the system at constant pressure. 

The Helmholtz and Gibbs Functions 

At least as important as the change from V to P is the change 
from S to T. Adiabatic (constant S) processes are encountered, of 
course, but many thermodynamic measurements are carried out at 
constant temperature. The Legendre transformation appropriate for 
this produces the Helmholtz function F = U - TS, for which 

dF ≤ −S dT − P dV + J dL + μ dn + ℋ d𝔐 + ···    (8-8)

(∂F/∂T)_{Vn...} = −S;   (∂F/∂V)_{Tn...} = −P;   (∂F/∂n)_{TV...} = μ;   etc.

The related Maxwell relations include

(∂S/∂V)_{Tn...} = (∂P/∂T)_{Vn...};   (∂S/∂𝔐)_{TV...} = −(∂ℋ/∂T)_{V𝔐...};   etc.    (8-9)

Since the heat capacity C_{V𝔐...} is equal to T(∂S/∂T)_{V𝔐...} [see Eq. (6-8)], by differentiating these relations again with respect to T we obtain a set of equations

(∂C_{V𝔐...}/∂V)_{T𝔐...} = T(∂²P/∂T²)_{V𝔐...};   (∂C_{V𝔐...}/∂𝔐)_{TV...} = −T(∂²ℋ/∂T²)_{V𝔐...};   etc.    (8-10)



which indicate that the heat capacity is not completely independent of the equations of state. The dependence of C_{V𝔐...} on T is something we must obtain directly by measurement; its dependence on the other variables, V, 𝔐, etc., can be obtained from the equations of state.

By the same arguments as before, the inequality of Eq. (8-8) shows that for a system in which T, V, n, 𝔐, etc., are held constant, the Helmholtz function F is minimal at equilibrium. If T is held constant, any change in F can be transformed completely into work, such



as −P dV or ℋ d𝔐, etc. Thus F is sometimes called the free energy of the system at constant temperature. The Maxwell relations (8-9) are particularly useful. For example, the second equation shows that, since an increase of magnetic polarization increases the orderliness in orientation of the atomic magnets, and hence entropy (a measure of disorder) will decrease as 𝔐 increases, therefore the magnetic intensity ℋ required to produce a given magnetization 𝔐 will increase as T is increased. Similarly we can predict that since rubber tends to change from amorphous to crystal structure (i.e., becomes more ordered) as it is stretched, therefore the tension in a rubber band held at constant length will increase as T is increased.
Another potential of considerable importance is the Gibbs function G = U + PV − TS = F + PV, having T and P for its natural coordinates, instead of S and V, and for which

dG ≤ −S dT + V dP + μ dn + J dL + ℋ d𝔐 + ···    (8-11)

The Gibbs function is a minimum at equilibrium for a system held at constant T, P, n, 𝔐, ..., a property that we will utilize extensively in the next two chapters. The Maxwell relations include

(∂S/∂P)_{Tn...} = −(∂V/∂T)_{Pn...};   (∂S/∂𝔐)_{TP...} = −(∂ℋ/∂T)_{P𝔐...};   etc.    (8-12)

from which we can obtain the dependence of the heat capacity C_{P𝔐...} on the mechanical variables

(∂C_{P𝔐...}/∂P)_{T𝔐...} = −T(∂²V/∂T²)_{P𝔐...};   etc.    (8-13)

This process can be continued for all the mechanical variables, obtaining a new potential each time, and a new set of Maxwell relations. For example there is a magnetic Gibbs function G_m = U − TS + PV − ℋ𝔐. From it we can obtain the following:

(∂S/∂ℋ)_{TP...} = (∂𝔐/∂T)_{Pℋ...};   (∂𝔐/∂P)_{Tℋ...} = −(∂V/∂ℋ)_{TP...};   etc.    (8-14)

Finally there is what is called the grand potential Ω, obtained by a Legendre transformation from n to μ as the independent variable, which is useful in the study of systems with variability of number of particles, such as some quantum systems exhibit:

Ω = U − TS − μn;   dΩ ≤ −S dT − P dV − n dμ + J dL + ···    (8-15)

from which the following Maxwell relations come:

(∂S/∂μ)_{TV...} = (∂n/∂T)_{Vμ...};   (∂n/∂V)_{Tμ...} = (∂P/∂μ)_{TV...};   etc.    (8-16)



Procedures for Calculation 

A technique for remembering all these relationships can be worked out for simple systems, such as those definable in terms of T and V, with n constant. Here, when we use Euler's equation (6-3),

U = ST − PV + μn;   H = U + PV = ST + μn

F = U − ST = −PV + μn;   G = F + PV = μn    (8-17)

The mnemonic device is shown in Fig. 8-2. It indicates the natural 
variables for each of the four potentials (S and V for U, etc.) and the 
arrows indicate the basic partial derivative relations, with their signs, 




FIG. 8-2. Diagram relating the thermodynamic potentials 
with their natural variables, their partials, and 
the partials of these variables (Maxwell rela- 
tions) for a simple system. 

such as (∂H/∂P)_S = +V and (∂F/∂T)_V = −S. It also indicates the nature of the various Maxwell relations, such as (∂S/∂V)_T = (∂P/∂T)_V




or (∂S/∂P)_T = −(∂V/∂T)_P; the arrows this time connect the numerator of the partial derivative with the subscript on it and the directions of the arrows again indicate the sign.

Using this device we are now in a position to formulate a strategy for expressing any possible rate of change of a thermodynamic variable in terms of the immediately measurable quantities such as heat capacity and an equation of state, or else the empirically determined partials (∂V/∂T)_P and (∂V/∂P)_T from which we could obtain an equation of state. Either C_V or C_P can be considered basic (C_P is the one usually measured),

C_V = (∂U/∂T)_V = T(∂S/∂T)_V;   C_P = (∂H/∂T)_P = T(∂S/∂T)_P    (8-18)

The relation between them is given in terms of partials from the equation of state [see Eq. (7-3)],

C_P = C_V + T(∂P/∂T)_V (∂V/∂T)_P    (8-19)

The various tactics which can be used to express an unfamiliar partial in terms of an immediately measurable one are:

(a) Replacing the partials of the potentials with respect to their adjoining variables in Fig. 8-2 by the related variables, such as (∂F/∂T)_V = −S or (∂U/∂S)_V = T, etc.

(b) Replacing a partial of a potential with respect to a nonadjacent variable, obtainable from its basic equation, such as dF = −S dT − P dV, from which we can get (∂F/∂V)_S = −S(∂T/∂V)_S − P, and (∂F/∂S)_P = −S(∂T/∂S)_P − P(∂V/∂S)_P, etc.

(c) Using one or more of the Maxwell relations, obtainable from 
Fig. 8-2. 

(d) Using the basic properties of partial derivatives, as displayed 
in Eqs. (3-9) and (3-10). 

In terms of these tactics, the appropriate strategies are: 
1. If a potential is an independent variable in the given partial, 
make it the dependent variable by using (d) (this process is called 
bringing the potential into the numerator) and then use (a) or (b) to 
eliminate the potential. Examples: 






(∂T/∂U)_V = 1/(∂U/∂T)_V = 1/C_V

(∂T/∂V)_U = −(∂U/∂V)_T/(∂U/∂T)_V = −(T/C_V)(∂S/∂V)_T + P/C_V

2. Next, if the entropy is an independent variable in the given par- 






tial or in the result of step 1, bring S into the numerator and elimi- 
nate it by using (c) or Eq. (8-18). Examples: 

(∂P/∂S)_V = (∂P/∂T)_V/(∂S/∂T)_V = (T/C_V)(∂P/∂T)_V

(∂T/∂V)_S = −(∂S/∂V)_T/(∂S/∂T)_V = −(T/C_V)(∂P/∂T)_V

3. If the measured equation-of-state partials have V in the numerator, bring V into the numerator in the result of steps 1 and 2 by using (d). If the equation of state is in the form P = f(V,T), bring P into the numerator.

The result of applying these successive steps will be an expression for the partial of interest in terms of measured (or measurable) quantities. It is obvious that if other variables, such as 𝔐 and ℋ instead of V and P, are involved, a square similar to Fig. 8-2 can be constructed, which would represent the relationships of interest, and procedures 1 to 3 can be carried through as before.
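The reduction strategy lends itself to machine checking. The sympy sketch below starts from the perfect-gas entropy of Eq. (6-5) and confirms the step-2 example, (∂T/∂V)_S = −(T/C_V)(∂P/∂T)_V; the perfect gas is chosen only because its equation of state is explicit.

import sympy as sp

n, R, T, V, T0, V0, s0 = sp.symbols('n R T V T0 V0 s0', positive=True)

Cv = sp.Rational(3, 2) * n * R
S = Cv * sp.log(T / T0) + n * R * sp.log(V / V0) + n * s0  # perfect-gas entropy
P = n * R * T / V                                          # equation of state

# implicit differentiation along S = const: (dT/dV)_S = -S_V / S_T
dTdV_S = -sp.diff(S, V) / sp.diff(S, T)

# the form produced by strategies 1 and 2: -(T/Cv)(dP/dT)_V
print(sp.simplify(dTdV_S + (T / Cv) * sp.diff(P, T)))   # prints 0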

Examples and Useful Formulas 

As one example, we shall anticipate a problem discussed in Chapter 9. There we shall find it enlightening to plot the Gibbs function G as a function of T for constant P. We can obtain such plots by integrating (∂G/∂T)_P = −S or, since S is not a directly measured quantity, by the double integration of −(∂S/∂T)_P = −C_P/T with respect to T. In the case of a gas, where C_V is a function of T alone, we must then express C_P as a function of T and P, so it may be integrated easily. Thus several applications of the procedures outlined above result in

(∂²G/∂T²)_P = −C_P/T = −C_V(T)/T − (∂V/∂T)_P(∂P/∂T)_V = −C_V/T + [(∂V/∂T)_P]²/(∂V/∂P)_T

so

G = G₀(P) − (T − T₀)S₀(P) − ∫_{T₀}^{T} dT′ ∫_{T₀}^{T′} { C_V(T″)/T″ − [(∂V/∂T″)_P]²/(∂V/∂P)_T } dT″    (8-20)



Since S₀(P), the entropy at temperature T₀ and at the constant pressure P, is a positive quantity, the plot of G against T for constant P has a negative slope. Furthermore, since C_V is positive and (∂V/∂P)_T is negative for all T and P, the slope of G becomes more negative as T increases; the plot curves downward (see Figs. 9-2 and 9-6).




Finally, and in part to collect some formulas for useful reference, 
let us work out again the entropy and then the thermodynamic poten- 
tials for a perfect gas of point atoms, from its equation of state PV 
= nRT and its heat capacity C_V = (3/2)nR. To find the entropy we use (∂S/∂T)_V = C_V/T = 3nR/2T, and also (∂S/∂V)_T = (∂P/∂T)_V = nR/V, so that

S = (3/2)nR ln(T/T₀) + nR ln(V/V₀) + ns₀

as per Eq. (6-5). To find the Helmholtz function, and thence the other potentials, we use (∂F/∂T)_V = −S and (∂F/∂V)_T = −P, so that

F = (3/2)nRT − (3/2)nRT ln(T/T₀) − ns₀T − nRT ln(V/V₀)

U = F + TS = (3/2)nRT = (3/2)nRT₀(V₀/V)^{2/3} exp[2(S − ns₀)/3nR]

H = U + PV = (5/2)nRT = (5/2)nRT₀(T₀P/P₀T)^{2/3} exp[2(S − ns₀)/3nR]    (8-21)

G = F + PV = ((5/2)nR − ns₀)T − (3/2)nRT ln(T/T₀) − nRT ln(V/V₀)

  = nμ = ((5/2)nR − ns₀)T − (5/2)nRT ln(T/T₀) + nRT ln(P/P₀)

Ω = F − nμ = ((3/2)nR − ns₀)T − (3/2)nRT ln(T/T₀) − nRT ln(V/V₀) − nμ

where we have utilized the fact that U = 0 when T = 0 to fix the value of the constant of integration F₀. These can be checked with Eqs. (8-1) to (8-15) for self-consistency.
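The suggested self-consistency check can be carried out mechanically; the sympy sketch below verifies that the F of Eqs. (8-21) returns −S and −P on differentiation.

import sympy as sp

n, R, T, V, T0, V0, s0 = sp.symbols('n R T V T0 V0 s0', positive=True)

S = sp.Rational(3, 2)*n*R*sp.log(T/T0) + n*R*sp.log(V/V0) + n*s0
F = (sp.Rational(3, 2)*n*R*T - sp.Rational(3, 2)*n*R*T*sp.log(T/T0)
     - n*s0*T - n*R*T*sp.log(V/V0))

print(sp.simplify(sp.diff(F, T) + S))         # 0, i.e. (dF/dT)_V = -S
print(sp.simplify(sp.diff(F, V) + n*R*T/V))   # 0, i.e. (dF/dV)_T = -P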

We note that the chemical potential per mole of the gas is a function of the intensive variables T and P only and is thus independent of n (as it must be). In fact we can see that all the thermodynamic potentials are n times a function which is independent of n, for V/V₀ and T/T₀ are both independent of n. This general property of thermodynamic potentials must be true for any system involving a single component (why?).




Changes of Phase 

Every substance (except helium) is a solid at sufficiently low tem- 
peratures. When it is in equilibrium it has a crystalline structure 
(glasses are supercooled liquids, not in equilibrium). As has already 
been mentioned and will be discussed later in some detail, entropy is 
a measure of randomness, so it will not be surprising to find that the 
entropy of a perfect crystal vanishes at absolute zero. (This state- 
ment is sometimes called the third law of thermodynamics.) 

The Solid State 

To see how the entropy and the thermodynamic potentials of a solid 
change as the temperature is increased, we shall utilize the simplified 
equation of state of Eq. (3-6), 

V = V₀(1 + βT − κP);   P = (β/κ)T − [(V − V₀)/κV₀]    (9-1)

Its heat capacities go to zero at zero temperature (see Fig. 3-1) and C_V rises to 3nR at high temperatures. The capacity C_P is related to C_V according to Eq. (7-3). A simple formula that meets these requirements is

C_V = 3nRT²/(θ² + T²);   C_P = C_V + (β²V₀/κ)T    (9-2)

The formulas do not fit too well near T = 0, but they have the right general shape and can be integrated easily. Constant θ usually has a value less than 100° K so that by room temperature C_V is practically equal to 3nR.

Using Eqs. (7-4) we compute the entropy as a function of T and P, remembering that S = 0 when T = 0 and P = 0,

S = (3/2)nR ln[1 + (T²/θ²)] + (β²V₀/κ)T − βV₀P    (9-3)




This formula makes it appear that S can become negative at very low temperatures if P is made large enough. Actually β becomes zero at T = 0, so that at very low temperatures S is independent of pressure. At moderate temperatures and pressures Eq. (9-3) is valid; curves of S as a function of T for different values of P are shown in Fig. 9-1.




FIG. 9-1. Solid lines plot entropy of a solid as a function 
of T and P. 

Next we use Eqs. (8-11) to compute the Gibbs function G as func- 
tion of T and P, 

G = −(3/2)nRT ln[1 + (T²/θ²)] + 3nRT − 3nRθ tan⁻¹(T/θ) − (V₀β²/2κ)T² + V₀P[1 + βT − (1/2)κP] + U₀    (9-4)

where U₀ is a constant of integration. Typical curves for this function are plotted in Fig. 9-2 for different values of P. We note that G is nearly constant at low temperatures, dropping somewhat at higher temperatures, as indicated by Eq. (8-20).

Melting 

If we add more and more heat (quasistatically) to the crystalline solid, holding the pressure constant at some moderate value, its tem-









FIG. 9-2. Solid lines plot Gibbs function of a solid as a 
function of T and P. 



perature rises until finally it melts, turning into a liquid with none of the regularities of the crystal. During the melting, addition of more heat simply melts more crystal; the temperature does not rise again until all is melted. The temperature T_m at which the melting occurs depends on the pressure, and the amount of heat required to melt 1 mole of the crystal, L_m, is called the latent heat of melting (it also is a function of the pressure).

We wish to ask how thermodynamics explains these facts, and next to find whether it can predict anything about the dependence of T_m, the temperature of melting, on P, and of the latent heat of melting L_m on any of the thermodynamic quantities.

The answer to the first question lies in the discussion of Eq. (8-11), defining the Gibbs function. At any instant during the quasistatic process of heat addition, the temperature and pressure are constant; thus the material takes up the configuration which has the lowest value of G for that T and P. Below T_m the G for the solid is less than the G for the liquid; above T_m the liquid phase has the lower value of G; if the process is carried out reversibly, all the material must melt at the temperature T_m(P), at which the two G's are equal (for the pressure P).

If we could supercool the liquid to T = 0 we would find its entropy to be larger than zero, because a liquid is irregular in structure. Furthermore the entropy of the liquid increases more rapidly with temperature than does the S for the solid. Since (∂G/∂T)_P = −S, even though at T = 0 the G for the liquid is greater than the G for the



CHANGES OF PHASE 71 

solid, it will drop more rapidly as T increases (see the dotted line of Fig. 9-2) until, at T = T_m, the two G's are equal; above T_m the liquid has the lower G and is thus the stable phase. Thus thermodynamics explains the sudden and complete change of phase at T_m.

The second question, raised earlier, is partly answered by pointing out that heat must be added to melt the material and that, at constant T, an addition of heat đQ corresponds to an increase in entropy of the substance by an amount đQ/T. Thus the n moles of liquid, at the melting point T_m, have an entropy nL_m/T_m greater than the solid at the melting point, where nL_m is the heat required to melt n moles of the solid (at the specified P). Thus a measurement of the latent heat of melting L_m enables one to compute the entropy of the liquid at T_m, in terms of the entropy of the solid (which is computed by integration from T = 0), and further integration enables one to compute S for the liquid, for T greater than T_m, knowing the heat capacity and the equation of state of the liquid.

Clausius-Clapeyron Equation 

To answer the rest of the second question we utilize the fact that at the melting point the Gibbs function G_s(T_m,P) for the solid equals the Gibbs function G_l(T_m,P) for the liquid, no matter what the pressure P. In other words, as we change the pressure, the temperature of melting T_m(P), the Gibbs function G_s for the solid and G_l for the liquid all change, but the change in G_s must equal the change in G_l, in order that the two G's remain equal at the new pressure. Referring to Eq. (8-11) we see that this means that

dG_s = −S_s dT + V_s dP = dG_l = −S_l dT + V_l dP

or

(V_l − V_s) dP = (S_l − S_s) dT_m = (nL_m/T_m) dT_m

since the difference in entropy between a mole of liquid and a mole of solid is equal to the latent heat divided by the temperature of melting. Thus the equation relating T_m to P is

dT_m/dP = (T_m/nL_m)(V_l − V_s)    (9-5)

which is the Clausius-Clapeyron equation.

If the volume of the liquid is greater than the volume of the solid, then an increase of pressure will raise the temperature of melting so that, for example, if such a liquid is just above its melting point, an increase in pressure can cause it to solidify. Vice versa, if the solid is less dense than the liquid (as is the case with ice) an increase of pressure lowers the melting point, and pressure can cause such a solid, just below its melting point, to melt. Thus the fact that ice floats is related to the fact that ice skating is possible; ice skates ride on a film of water which has been liquefied by the pressure of the skate. In general, since V_l differs but little from V_s, the pressure must be changed by several thousand atmospheres to change T_m by as much as 10 per cent.
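Equation (9-5) can be given numbers for the ice-water transition; the latent heat and molar volumes below are standard handbook values, not data from the text.

T_m = 273.15     # K, melting point of ice at 1 atm
L_m = 6.01e3     # J/mol, latent heat of melting
V_l = 18.0e-6    # m^3/mol, liquid water
V_s = 19.6e-6    # m^3/mol, ice, larger than V_l since ice floats

dTdP = T_m * (V_l - V_s) / L_m    # Eq. (9-5) with n = 1
print(f"dT_m/dP = {dTdP:.2e} K/Pa = {dTdP * 1.013e5:.4f} K/atm")

About −0.0074 °K per atmosphere: the melting point falls with pressure, and shifting it by even 10 °K takes over a thousand atmospheres, consistent with the estimate above.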

Evaporation 

If now heat is added to the liquid, its temperature will increase un- 
til another phase change occurs— the liquid evaporates. Here again 
the temperature remains constant at the temperature of vaporization T_v until all the liquid is converted into vapor. To be sure we under-
stand what has taken place let us examine the process in more detail. 
We have tacitly assumed that the substance, first solid and then liquid, 
is confined in a container which adjusts its volume V so that it exerts 
a pressure P on all parts of the outer surface of the material. In 
other words we have assumed that the volume V is completely filled 
by the substance. 

This may be difficult to do for the solid, but it is not hard to ar- 
range it for the liquid. We provide the container with a piston, which 
exerts a constant force on the liquid and which can move to allow the 
liquid to expand at constant pressure P, and we make sure the liquid 
completely fills the container. In this case the liquid will stay liquid 
while we add heat, until its temperature reaches T_v(P), when it must all be converted into gas (at a much greater volume but at the same pressure) before additional heat will raise the temperature beyond T_v. The temperature of vaporization T_v(P) is related to the pressure by another Clausius-Clapeyron equation,

dT_v/dP = (T_v/nL_v)(V_g − V_l)    (9-6)

where here L_v is the latent heat of evaporation per mole of the material (at pressure P), V_l is the volume of the material as a liquid before evaporation, and V_g is its volume as a gas, after evaporation, at T_v and P. Since V_g is very much larger than V_l, T_v changes much more rapidly with P than does T_m.

However our usual experience is not with the behavior of liquids 
that entirely fill a container, but with evaporation from the free sur- 
face of a liquid. When a liquid (or solid) does not completely fill a 
container, some of the substance evaporates into the free space until 
there is enough vapor there so that equilibrium between evaporation 
and condensation is reached. This equilibrium is only reached when 
the temperature of the liquid and vapor is related to the pressure of 






the vapor in the free space above the liquid by the functional relationship we have been writing, T_v(P), determined by Eq. (9-6). In the case we are now discussing it is better to reverse the functional relationship and write that the vapor pressure P_v is a function of the temperature T, and that the equation specifying this relationship is the reciprocal of (9-6),

dP_v/dT = nL_v/[T(V_g − V_l)]    (9-7)

This is the more familiar form of the Clausius-Clapeyron equation.
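When V_g ≫ V_l and the vapor is treated as a perfect gas, V_g = RT/P_v per mole, Eq. (9-7) becomes dP_v/dT = L_v P_v/RT² and integrates at once. The sketch below applies this to water, with the handbook latent heat treated as a constant, which is only roughly true over so wide a range.

import math

R = 8.314
L_v = 40.7e3              # J/mol, latent heat of vaporization of water
P0, T0 = 1.013e5, 373.15  # reference state: the normal boiling point

def vapor_pressure(T):
    # integral of dP_v/dT = L_v P_v / (R T^2) from (T0, P0):
    return P0 * math.exp(-(L_v / R) * (1 / T - 1 / T0))

for T in (273.15, 298.15, 323.15, 373.15):
    print(f"T = {T:6.2f} K   P_v = {vapor_pressure(T) / 1.013e5:7.4f} atm")

This gives the right strongly exponential trend, roughly 0.008 atm at the freezing point against 1 atm at boiling; the residual drift from measured values reflects the constant-L_v approximation.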

The presence of another kind of gas in the space above the free surface of a liquid (or solid) has only an indirect effect on the amount of vapor present. The total pressure P on the liquid is now the sum of the partial pressures P_f of the foreign gas and P_v of the vapor. An addition of enough more of the foreign gas to increase this total pressure by dP (keeping T constant) will increase the Gibbs function of the liquid by dG_l = V_l dP [see Eq. (8-11); dT = 0], but the Gibbs function of the same amount of material in gaseous form is not affected by the foreign gas, so dG_g = V_g dP_v. For liquid and vapor to remain in equilibrium dG_l must equal dG_g; consequently the relation between the vapor pressure P_v(P,T) in the presence of a foreign gas and the total pressure P is given by

dP_v/dP = V_l/V_g

which may be integrated from the initial state where no foreign gas is present [P = P_v and P_v is the solution of Eq. (9-6)] to the final state where P = P_f + P_v. Since V_g is so much larger than V_l, P_v changes very little as the foreign gas is added, but what change there is, is positive. Addition of foreign gas squeezes a little more vapor out of the liquid, rather than pushing some vapor back into the liquid.

Water in a dish open to the air is not in equilibrium unless the partial pressure of water vapor in the air happens to be exactly equal to the vapor pressure P_v(T) for the common temperature T of air and water (this is the condition of 100 per cent humidity). If the common temperature is above this, the water continues to evaporate until it is all gone, the evaporation proceeding more rapidly as the temperature is raised, until the boiling point is reached, when P_v(T) is equal to the total atmospheric pressure; the gas immediately above the water is all water vapor, and the water boils rapidly away.

The latent heats of evaporation are usually 10 to 50 times greater 
than the corresponding latent heats of melting, corresponding to the 
fact that it takes much more work to pull the material into a tenuous 
vapor than it does to change it into a liquid, which disrupts the crys- 
tal structure but doesn't pull the atoms much further apart. 






At very high pressures there are also phase changes in the solid 
state; the crystal structure of the solid changes, with accompanying 
latent heat, change of volume, and relationship between P and T for 
the change given by an equation such as (9-6). 



Triple Point and Critical Point 

We have just seen that the melting temperature is nearly independ- 
ent of pressure, whereas the temperature of vaporization is strongly 
dependent on P. Therefore as P decreases, the two curves, one for 
T v , the other for T m , converge. This is shown in Fig. 9-3, where the 










FIG. 9-3. Phase diagram for a material that expands upon 
melting. Solid lines are the curves for phase 
change, dashed lines those for constant volume. 



curve AB is the melting-point curve and AC that for vaporization. The two meet at the triple point A, which is the only point where solid, liquid, and vapor can coexist in equilibrium. Below this pressure the liquid is not a stable phase, and along the curve OA the solid transforms directly into the vapor (sublimation). The shape of curve OA is governed by an equation similar to (9-6), with a latent heat of sublimation L_s (equal to L_m + L_v at the triple point). The dashed lines of Fig. 9-3 are lines of P against T for different values of V, intersections of the PVT surface by planes parallel to the PT plane.






As the pressure is increased, keeping T = T_v(P) so that we follow the curve AC, the differences (V_g − V_l) and (S_g − S_l) = (nL_v/T_v) between gas and liquid diminish until at C, the critical point, there ceases to be any distinction between liquid and gas and the curve AC terminates. There seems to be no such termination of the curve AB for melting; the difference between the regularly structured solid and the irregular liquid remains for pressures up to the maximum so far attained; it may be that curve AB continues to infinity.

The PT plane is only one way of viewing the PVT surface repre- 
senting the equation of state. Another sometimes more useful projec- 
tion is the one on the PV plane. In Fig. 9-4 are plotted the dashed 
curves of P against V for different values of T, corresponding to 








FIG. 9-4. PV curves (dashed lines) for the material of 

Fig. 9-3. Solid lines are projections on the PV 
plane of the solid lines of Fig. 9-3. 



intersections of the PVT surface with planes of constant T, parallel to the PT plane. The regions where the PV curves are horizontal are where there is a phase change. The boundaries of these regions, ABA and ACA, projected on the PT plane, are the curves AB and AC of Fig. 9-3. It is clearer from Fig. 9-4 why C is the critical point. The line AAA corresponds to the triple point A of Fig. 9-3. The SPT surface is also divided into the various phase regions. Figure 9-5 shows a part of this surface, projected on the ST plane, the surface being ruled with the dashed lines of constant pressure.



v ' / ' 


• 


1 N. /q, vapor 


• 


s 
s 
s 


! A 




/ 




^*\^ / 


/ 
/ 
/ 

/ 
/ / 


vapor-solid 


vapor-liquid 


F gas 




1 >7 


/ 




1 ^ / ^ / 


/ 






/ ,*<>'' 














A 


sol-liq ■ i 


i B 




^c---"— "" 


solid 



FIG. 9-5. Entropy as a function of T (dashed curves) for 

various values of P, for the material of Fig. 9-3 



We see that, as pressure is kept constant and T is increased, S increases steadily until a phase change occurs, when S takes a sudden jump, of amount L/T, and then continues its steady increase with temperature in the new phase. Entropy change is largest between solid and vapor, not because the latent heat is so much larger for this phase change, but because it occurs at low temperature, and ΔS = L_s/T_s, where T_s is small.



The GPT surface for the Gibbs function, projected on the GT 
plane, is shown in Fig. 9-6. The dashed lines correspond to the inter- 
sections of the surface with planes of constant P, parallel to the GT 







FIG. 9-6. Gibbs function versus temperature (dashed lines) 
for various values of P, for the material of Fig. 
9-3. 



plane. The solid lines correspond to the phase changes. As noted earlier in this chapter, G does not change suddenly during a phase change, as do V and S; only the slopes (∂G/∂P)_T = V and (∂G/∂T)_P = −S (the slopes of the dashed lines of Fig. 9-6) change discontinuously across the phase-change boundaries. By taking gradients the curves of Figs. 9-4 and 9-5 can be obtained from Fig. 9-6 or, vice versa, the curves of Fig. 9-6 can be obtained by integration of the data on curves in Figs. 9-4 and 9-5.




Chemical Reactions



The Gibbs function also is of importance in describing chemical 
processes. Since most chemical reactions take place at constant tem- 
perature and pressure, the reaction must go in the direction of de- 
creasing G. 

Chemical Equations 

A chemical reaction is usually described by an equation, such as 2H₂ + O₂ → 2H₂O, which we can generalize as

−Σᵢ νᵢMᵢ → Σⱼ νⱼMⱼ

stating that a certain number, −νᵢ, of molecules of the initial reactants Mᵢ will combine to produce a certain number, νⱼ, of molecules of final products Mⱼ. Of course the chemical reaction can run in either direction, depending on the circumstances; we have to pick a direction to call positive. This is arbitrarily chosen to be the direction in which the reaction generates heat. We then, also arbitrarily, place all the terms in the equation on the right, so that it reads

0 = Σᵢ νᵢMᵢ    (10-1)

In this form the ν's which are negative represent initial reactants and those which are positive represent final products (ν for the H₂ in the example would be −2, ν for the O₂ would be −1, and ν for the H₂O would be +2). The number νᵢ is called the stoichiometric coefficient for Mᵢ in the reaction.

For a chemical reaction to take place, more than one sort of material must be present either initially or finally, and the numbers nᵢ of moles of the reacting materials will change during the reaction. Referring to Eqs. (8-11) and (8-17) we see that the Gibbs function and its change during the reaction are







G = Σᵢ nᵢμᵢ;   dG = −S dT + V dP + Σᵢ μᵢ dnᵢ    (10-2)

where μᵢ is the chemical potential of the i-th component of the reaction, which we see is the Gibbs function per mole of the material Mᵢ. We shall see shortly how it depends on T and P and how it can be measured.

A chemical reaction of the sort described by Eq. (10-1) produces a change in the n's in an interrelated way. If dn₁ = ν₁ dx moles of material M₁ appear during a given interval of time while the reaction progresses, then dnᵢ = νᵢ dx moles of material Mᵢ will appear during the same interval of time (or will disappear, if νᵢ is negative). For example, if 2dx moles of H₂O appear, simultaneously 2dx moles of H₂ and dx moles of O₂ will disappear (i.e., dn_{H₂} = −2dx and dn_{O₂} = −dx). In other words, during a chemical reaction at constant T and P, the change in the Gibbs function is

dG = Σᵢ μᵢνᵢ dx

In accordance with the discussion of Eq. (8-11) we should expect the reaction at constant T and P to continue spontaneously, with consequent decrease of G, until G becomes minimum at equilibrium. Thus, at equilibrium, at constant T and P, dG/dx = 0, or

Σᵢ μᵢνᵢ = 0   for equilibrium    (10-3)

At equilibrium the relative proportions of the reactants and the products must adjust themselves so that the μ's, which are functions of T, P, and the concentration of the i-th component, satisfy Eq. (10-3).

Heat Evolved by the Reaction 

During the progress of the reaction, if carried out at constant T and P, any evolution of heat would be measured as a change in enthalpy since, as we have already remarked [see the discussion after Eq. (8-7)], enthalpy is the heat content of the system at constant pressure. But since, from Eq. (8-17),

H = G + TS = G − T(∂G/∂T)    (10-4)

the change in H as the parameter x changes at constant T and P is

dH = [Σᵢ μᵢνᵢ − T Σᵢ (∂μᵢ/∂T)_{P,n} νᵢ] dx




At equilibrium Σᵢ μᵢνᵢ = 0 and the reaction ceases. If we measure the heat evolved per change dx when the system is close to equilibrium, the rate of evolution of heat becomes

dH/dx = −T Σᵢ (∂μᵢ/∂T)_{P,n} νᵢ    (10-5)

Thus by measuring the rates of change with temperature, (∂μᵢ/∂T)_{P,n}, of the chemical potentials of the substances involved, we can predict the amount of heat evolved during the reaction. Or, vice versa, if we can measure (or can compute, by quantum mechanics) the heat evolved when νᵢ moles of substance Mᵢ appear or disappear, by dissociating into their constituent atoms or by reassociating the atoms into the product molecules, we can predict the rate of change of the μ's with temperature.

Reactions in Gases 

To see how all this works out, we take the simple case of a mixture at high-enough temperature so that all the components are perfect gases. The number of moles of the i-th component is nᵢ and the total number of moles in the mixture is n = Σnᵢ. The relative proportions of the different gases can be given in terms of their concentrations xᵢ = nᵢ/n, so that Σxᵢ = 1, or they can be expressed in terms of their partial pressures pᵢ = xᵢP. Each mode of expression has its advantages.

For example, we can compute the Gibbs function per mole μᵢ of Mᵢ, as a perfect gas, in terms of T and pᵢ. Using the procedures of Chapter 8 [see Eqs. (8-21) for example], we have, for a perfect gas for which C_P(T) = C_V(T) + nR,

(∂S/∂T)_P = (1/T)C_P(T);   (∂S/∂P)_T = −(∂V/∂T)_P = −(nR/P)

so

S = S₀ + ∫_{T₀}^{T} C_P(T)(dT/T) − nR ln(P/P₀)

and, since

(∂G/∂T)_P = −S   and   (∂G/∂P)_T = V = nRT/P

we have

∂G/∂nᵢ = μᵢ = RT ln(pᵢ/P₀) + gᵢ(T) = RT[ln(P/P₀) + ln xᵢ] + gᵢ(T)    (10-6)






where

gᵢ(T) = gᵢ(T₀) − s₀ᵢ(T − T₀) − ∫_{T₀}^{T} dT′ ∫_{T₀}^{T′} c_pᵢ(T″)(dT″/T″)

s₀ᵢ being the entropy per mole of the i-th component at T = T₀ and c_pᵢ being the specific heat, the heat capacity per mole at constant pressure, of Mᵢ in its gaseous form.

The equation for chemical equilibrium (10-3) then takes on the form

RT Σᵢ νᵢ[ln(P/P₀) + ln xᵢ + gᵢ/RT] = 0

or

Πᵢ xᵢ^{νᵢ} = (P₀/P)^{Σνᵢ} K(T);   K(T) = exp[−Σᵢ νᵢgᵢ(T)/RT]    (10-7)

where the sign Π indicates the product of the terms xᵢ^{νᵢ} for all values of i.

The quantity K(T) is called the equilibrium constant and the equation determining the x's at equilibrium is called the law of mass action. To see how this goes, we return to the reaction between H₂ and O₂. Suppose initially n₁ moles of H₂ and n₂ moles of O₂ were present, and suppose during the reaction x moles of O₂ and 2x moles of H₂ combined to form 2x moles of H₂O. The total number of moles then present would be (n₁ − 2x) + (n₂ − x) + 2x = n₁ + n₂ − x, so the concentrations at equilibrium and the stoichiometric coefficients for the reaction are



for H₂:   x₁ = (n₁ − 2x)/(n₁ + n₂ − x),   ν₁ = −2

for O₂:   x₂ = (n₂ − x)/(n₁ + n₂ − x),   ν₂ = −1

for H₂O:  x₃ = 2x/(n₁ + n₂ − x),   ν₃ = +2

and the law of mass action becomes



4x²(n₁ + n₂ − x)/[(n₁ − 2x)²(n₂ − x)] = (P/P₀) exp[(2g₁ + g₂ − 2g₃)/RT] = (P/P₀)K(T)

from which we can solve for x. Since this particular reaction is strongly exothermic, 2g₁ + g₂ is considerably larger than 2g₃ and the exponential K(T) is a very large quantity unless T is large. Consequently, for P ≈ P₀ and for moderate temperatures, x will be close to (1/2)n₁ or n₂, whichever is smaller, i.e., the reaction will go almost to completion, using up nearly all the constituent in shorter supply. If there is a deficiency in hydrogen, for example, so that n₁ < 2n₂, then we set x = (1/2)n₁ − δ, where δ is small, and






(26) " [D,-(l/2)nJ \P^ exp 



Xi~ 



26 



1 ^+(1/2)^' 



RT 



_ n 2 ~ (1/2)^ n t 

X2 "n 2 +(l/2) ni ' X 3" ^+(1/2)11, 



the system coming to equilibrium with a very small amount, 2δ moles, of H₂ left. This amount increases with decrease of pressure P and with increase of T.
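Given a numerical value for K(T), the mass-action equation is solved for x in a few lines. In the sketch below the initial moles and the trial values of K are invented, and P = P₀ is assumed so that the pressure factor drops out.

def residual(x, n1, n2, K):
    # 4 x^2 (n1 + n2 - x) / [(n1 - 2x)^2 (n2 - x)] - K, for P = P0
    return 4 * x**2 * (n1 + n2 - x) / ((n1 - 2 * x)**2 * (n2 - x)) - K

def extent(n1, n2, K):
    # the left side climbs from 0 at x = 0 toward infinity as x -> n1/2
    # (hydrogen in shorter supply), so bisection on the sign converges
    lo, hi = 0.0, (n1 / 2) * (1 - 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid, n1, n2, K) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

n1, n2 = 2.0, 2.0            # moles of H2 and O2; note n1 < 2 n2
for K in (1e2, 1e6, 1e12):   # invented equilibrium constants
    x = extent(n1, n2, K)
    print(f"K = {K:8.0e}   x = {x:.6f}   H2 left = {n1 - 2*x:.2e} mol")

As K grows, x crowds up against (1/2)n₁ and the leftover hydrogen 2δ = n₁ − 2x shrinks, the near-completion described in the text.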

To see how the equilibrium constant K(T) changes with temperature, we can utilize Eqs. (10-3), (10-5), and (10-6). We have

(d/dT) ln K = (1/RT²)[Σᵢ νᵢgᵢ − T Σᵢ νᵢgᵢ′] = (1/RT²)(∂H/∂x)_{T,P}    (10-8)

where gᵢ′ = (dgᵢ/dT). The rate of change of the equilibrium constant K(T) with temperature is thus (for gas reactions) proportional to the amount of heat evolved (∂H/∂x)_{P,T} per unit amount of reaction at equilibrium at constant T and P. This useful relationship is known as Van't Hoff's equation.
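If the reaction heat (∂H/∂x) is sensibly constant over a temperature interval, Eq. (10-8) integrates to ln[K(T₂)/K(T₁)] = −[(∂H/∂x)/R](1/T₂ − 1/T₁). The sketch below works a small example; the values of K(T₁) and of the reaction heat are invented for the demonstration.

import math

R = 8.314
K1, T1 = 1.0e6, 500.0   # invented equilibrium constant at 500 K
dHdx = -1.0e5           # J per unit of reaction; negative for an exothermic one

def K_at(T2):
    # integrated Van't Hoff equation, dH/dx treated as constant
    return K1 * math.exp(-(dHdx / R) * (1 / T2 - 1 / T1))

for T2 in (400.0, 500.0, 600.0):
    print(f"T = {T2:5.0f} K   K = {K_at(T2):.3e}")

For the exothermic case the constant falls as T rises, pushing the equilibrium back toward the reactants, which is the qualitative content of Van't Hoff's equation.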

Electrochemical Processes 

Some chemical reactions can occur spontaneously in solution, generating heat, or they can be arranged to produce electrical energy instead of heat. For example, a solution of copper sulfate contains free copper ions in solution. The addition of a small amount, Δn moles, of metallic zinc in powder form will cause Δn moles of copper to appear in metallic form, the zinc going into solution, replacing the Cu ions. At the same time an amount WΔn of heat is released. If the reaction takes place at constant pressure and temperature, Eq. (8-7) indicates that WΔn must equal the difference in enthalpy between the initial state, with Cu in solution, and the final state, with Zn in solution,

H₁ − H₂ = WΔn    (10-9)



However the same reaction can take place in a battery, having one electrode of copper in a CuSO₄ solution and the other electrode (the negative pole of the battery) of zinc, surrounded by a ZnSO₄ solution,






the two solutions being in contact electrically and thermally. If the battery now discharges an amount ΔC coulombs of charge through a resistor, or a motor to produce work, more Cu will be deposited on the Cu electrode and an equal number of moles, Δn, of Zn will leave the zinc electrode to go into solution. In this case the energy of the reaction goes into electromechanical work; for every charge ΔC discharged by the battery, ℰΔC joules of work are produced, where ℰ is the equilibrium voltage difference between the battery electrodes. If the battery is kept at constant temperature and pressure during the quasistatic production of electrical work, Eq. (8-11) (which in this case can be written dG = −S dT + V dP + ℰ dC) shows that the work done equals the change in the Gibbs function caused by the reaction. In other words ℰΔC = ΔG. But Eq. (10-4) shows the relationship between enthalpy and Gibbs function, which in this case can be written

ΔG = H₁ − H₂ + T(∂ΔG/∂T)_P    (10-10)

which, with Eq. (10-9), provides a relationship between the electrical properties of the battery and the thermal properties of the related chemical reaction.

If the ions have a valency z (for Zn and Cu, z = 2), then a mole of
ions possesses a charge zF, where F is the Faraday constant, 9.65 ×
10⁴ coulombs per mole. Thus the charge ΔC is equal to zF Δn, where
Δn is the number of moles of Zn that goes into solution (or the number
of moles of Cu that is deposited). Combining Eqs. (10-9) and (10-10),
we obtain the equation (using ΔG = zFℰ Δn and dividing by zF Δn)

ℰ = (W/zF) + T(∂ℰ/∂T)_P   (10-11)

relating the emf of the cell and the change in emf with temperature to
the heat W evolved in the corresponding chemical reaction; and, by
use of Eq. (10-5), we have derived a means of obtaining empirical
values of the chemical potentials μᵢ. Thus electrical measurements,
which can be made quite accurately, can be used instead of thermal
measurements to measure heats of reaction. Equation (10-11) is
called the Gibbs-Helmholtz equation.
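As a numerical illustration of Eq. (10-11), here is a minimal sketch (Python) that recovers the heat of reaction from an assumed emf and temperature coefficient; the two measured quantities are invented for the example and are not quoted from any table:

    # Sketch: heat of reaction from cell data via Eq. (10-11),
    # W = zF [emf - T (d emf/dT)].  The emf values are assumed.
    F = 9.65e4          # Faraday constant, C/mol
    z = 2               # valency of the Cu and Zn ions
    T = 298.0           # K
    emf = 1.10          # equilibrium emf, volts (assumed)
    demf_dT = -4.0e-4   # temperature coefficient, V/K (assumed)

    W = z * F * (emf - T * demf_dT)
    print("heat of reaction W =", W, "J per mole of reaction")

Only the emf and its slope with temperature enter; this is just the sense in which electrical measurements replace calorimetry.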



II

KINETIC THEORY




Probability and Distribution Functions



We have now sketched out the main logical features of thermody- 
namics and have discussed a few of its applications. We could easily 
devote the rest of this text to other applications, covering all branches 
of science and engineering. But, as physicists, it is more appropriate 
for us to go on to investigate the connection between the thermal prop- 
erties of matter in bulk and the detailed properties of the atoms that 
constitute this matter. The connection, as we saw in Chapter 2, must 
be a statistical one and thus will be expressed in terms of probabili- 
ties. 

Probability 

A basic concept is hard to define, except circularly. Probability is
in part subjective, a quantification of our expectation of the outcome of
some event (or trial), and only measurable if the event or trial can be
repeated several times. Suppose one of the possible outcomes of a
trial is A. We say that the probability that the trial results in A is
P(A) if we expect that, out of a series of N similar trials, roughly
NP(A) of them will result in A. We expect that the fraction of the
trials which do result in A will approach P(A) as the number of trials
increases. A standard example is the gambler's six-sided die; a 5
doesn't come up regularly every sixth time the die is thrown, but if a
5 comes up 23 times in 60 throws and also 65 times in the following
240 throws, we begin seriously to doubt the symmetry of the die
and/or the honesty of the thrower.

If result A occurs for every trial, then P(A) = 1. If other events 
sometimes occur, such as event B, then the probability that A does 
not occur in a trial, 1 - P(A), is not zero. It may be possible that 
both A and B can occur in a single trial; the probability of this hap- 
pening is written P(AB). Simple logic will show that the probability of 
either A or B or both occurring in a trial is 




P(A + B) = P(A) + P(B) - P(AB) (11-1) 

Relationships between probabilities are often expressed in terms of 
the conditional probabilities P(A|B) that A occurs in a trial if B 
also occurs, and P(B|A) the probability that B occurs if A occurs 
as well. We can see that 

P(AB) = P(A|B) P(B) = P(B|A) P(A) = P(BA) (11-2) 

A simple example is in the dealing of well-shuffled cards. The
chance of a heart being dealt is P(H) = 1/4, the chance that the card
is a seven if it is a heart is P(7|H) = 1/13, and therefore the probability
that the card dealt is the seven of hearts is P(7H) = P(7|H) P(H)
= (1/13)(1/4) = 1/52.

If the probability of A occurring in a trial is not influenced by the
simultaneous presence or absence of B, i.e., if P(A|B) = P(A), then
A and B are said to be independent. When this is the case,

P(A|B) = P(A); P(B|A) = P(B); P(AB) = P(A) P(B) 

[1 - P(A + B)] = [1 - P(A)][1 - P(B)] (11-3) 

Saying it in words, if A and B are independent, then the chance of 
both A and B occurring in a trial is the product P(A) P(B) of their 
separate probabilities of occurrence and the probability 1 - P(A + B) 
that neither A nor B occur is the product of their separate probabil- 
ities of nonoccurrence. In the example of the dealing of cards, since 
the chance P(7) of a seven being dealt is the same as the conditional 
probability P(7|H) that a seven is dealt if it is a heart, the occurrence 
of a seven is independent of the occurrence of a heart. Thus the prob- 
ability that the card dealt is a seven of hearts is P(7) P(H) = (l/13)(l/4) 
= 1/52, and the chance that it is neither a seven nor a heart is 
(12/13)(3/4) = 9/13. 

If the trial is such that, when A occurs B cannot occur and vice 
versa, then A and B are said to be exclusive and 

P(AB) = P(A|B) = P(B|A) = 0

so that 

P(A + B) = P(A) + P(B) (11-4) 

The chance of either A or B occurring, when A and B are exclusive, 
is thus the sum of their separate probabilities of occurrence. For ex- 
ample, the result that a thrown die comes up a 5 is exclusive of its 







coming up a 1; therefore the chance of either 1 or 5 coming up is 
(1/6) + (1/6) =1/3. 

Probabilities are useful in discussing random events. The defini- 
tion of randomness is as roundabout as the definition of probability. 
The results of successive trials are randomly distributed if there is 
no pattern in the successive outcomes, if the only prediction we can 
make about the outcome of the next trial is to state the probabilities 
of the various outcomes. 

Binomial Distribution 

The enumeration of the probabilities of all the possible outcomes
of an event is called a distribution function, or a probability distribution.
For example, suppose the "event" consists of N independent
and equivalent trials, such as the throwing of N dice (or the throwing
of one die N times). If the probability of "success" in one trial is p
(if "success" is for the die to come up a 1, for example, then p = 1/6),
it is not difficult to show that the probability of exactly n successes in
N trials (not distinguishing the order of failures and successes) is

P_n(N) = [N!/n!(N − n)!] p^n (1 − p)^(N−n)   (11-5)

All possible results of the event are included in the set of values of n
from 0 to N, so the set of probabilities P_n(N) is a distribution function;
this particular one is called the binomial distribution.
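Equation (11-5) is simple to tabulate; the following sketch (Python; N and p chosen for the die example) evaluates the binomial probabilities and verifies that they sum to unity, as they must for a distribution function:

    # Sketch: the binomial distribution of Eq. (11-5), with "success" =
    # the die comes up 1, so p = 1/6.
    from math import comb

    N, p = 12, 1.0 / 6.0
    P = [comb(N, n) * p**n * (1 - p)**(N - n) for n in range(N + 1)]

    print("most probable n:", max(range(N + 1), key=lambda n: P[n]))
    print("sum of all P_n(N):", sum(P))   # = 1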

When the possible results of the event are denumerable, as they 
are for the binomial distribution, the individual probabilities can be 
written P n , where n is the integer labeling one of the possible re- 
sults. If the results are mutually exclusive, 

Σ_n P_n = 1   (11-6)

where the sum is taken over the values of n corresponding to all pos- 
sible results. For example, for the binomial distribution, 

Σ_n P_n(N) = Σ_n [N!/n!(N − n)!] p^n (1 − p)^(N−n) = [p + (1 − p)]^N = 1

Suppose the value of result n is x(n). The expected value <x> of 
an event is then the weighted average 

<x> = £x(n) P n (11-7) 

n 




Any individual event would not necessarily result in this value, but we
should expect that the average value of a series of similar events
would tend to approach <x>. A measure of the variability of individual
events is the variance (σ_x)² of x, the mean square of the difference
between the actual value of an event and its expected value,

(σ_x)² = Σ_n [x(n) − <x>]² P_n = <x²> − <x>²   (11-8)

The square root of the variance, σ_x, is called the standard deviation of
x for the particular distribution.

Random Walk 

To make these generalities more specific let us consider the process
called the random walk. We imagine a particle in irregular motion
along a line; at the end of each period of time t it either has moved a
distance δ to the right or a distance δ to the left of where it was at
the beginning of the period. Suppose the direction of each successive
"step" is independent of the direction of the previous one, that the
probability that the step is in the positive direction is p and the probability
that it is in the negative direction is 1 − p. Then the probability
that during N periods the particle has made n positive steps, and
thus N − n negative steps, is the binomial probability P_n(N) of Eq.
(11-5).

The net displacement after these N periods, x(n) = (2n − N)δ, might
be called the "value" of the N random steps of the particle. The expected
value of this displacement after N steps can be computed in
terms of the expected value <n> of n,

<n> = Σ_{n=0}^{N} n P_n(N) = pN Σ_{k=0}^{N−1} [(N−1)!/k!(N−1−k)!] p^k (1 − p)^(N−1−k) = pN

so

<x(n)> = <(2n − N)δ> = (2p − 1)Nδ;  k = n − 1   (11-9)

When p = 1 (the particle always moves to the right) the expected displacement
after N steps is Nδ; when p = 0 (the particle always
moves to the left) it is −Nδ; when p = 1/2 (the particle is equally
likely to move right as left at each step) the expected displacement is
zero. The variability of the actual result of any particular set of N
steps can be obtained from <n> and <n²>,

<n²> = <n(n − 1)> + <n> = p²N(N − 1) + pN




so that

(σ_x)² = <[x(n) − N(2p − 1)δ]²> = <(2n − 2pN)²δ²> = 4Np(1 − p)δ²   (11-10)

When p = 1 (all steps to right) or p = 0 (all steps to left) the variance
(σ_x)² of the displacement is zero; when p = 1/2 the variance is greatest,
being Nδ² (the standard deviation is then σ_x = δ√N).
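Formulas (11-9) and (11-10) can be checked by direct simulation. A short Monte Carlo sketch (Python; the choices of N, p, δ, and the number of trials are arbitrary):

    # Sketch: Monte Carlo check of the random-walk mean and variance,
    # Eqs. (11-9) and (11-10).
    import random

    N, p, delta, trials = 100, 0.5, 1.0, 20000

    def displacement():
        # x = (2n - N) delta, n = number of positive steps in N tries
        n = sum(1 for _ in range(N) if random.random() < p)
        return (2 * n - N) * delta

    xs = [displacement() for _ in range(trials)]
    mean = sum(xs) / trials
    var = sum((x - mean) ** 2 for x in xs) / trials

    print("mean:", mean, "  expected:", (2 * p - 1) * N * delta)
    print("variance:", var, "  expected:", 4 * N * p * (1 - p) * delta**2)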

The Poisson Distribution 

When the outcome of an event has a continuous range of values, the
sums discussed heretofore must be changed to integrals and we must
talk about a probability density f(x), defined so that f(x) dx is the
probability that the result lies between x and x + dx. The representative
example is that of dots distributed at random along a line. The
line may be the distance traveled by a molecule of a gas and the dots
may represent collisions with other molecules, or the line may be the
time axis and the dots may be the instants of emission of a gamma
ray from a piece of radioactive material; in any of these cases a dot
is equally likely to be found in any element dx of length of the line.
If the mean density of dots is 1/λ per unit length (λ being thus the
mean distance between dots) then the chance that a randomly chosen
element dx of the line will contain a dot is (dx/λ), independent of the
position of dx.

We can now compute the probability P₀(x/λ) that no dot will be
found in an interval of length x of the line, by setting up a differential
equation for P₀. The probability P₀[(x + dx)/λ] that no dots are in an
interval of length x + dx is equal to the product of the probability
P₀(x/λ), that no dots are in length x, times the probability 1 − (dx/λ),
that no dot is in the additional length dx [see Eq. (11-3)]. Using
Taylor's theorem we have

P₀[(x + dx)/λ] = P₀(x/λ) + (dP₀/dx) dx = P₀(x/λ)[1 − (dx/λ)]

or

(d/dx)P₀(x/λ) = −(1/λ)P₀(x/λ),  whence  P₀(x/λ) = e^(−x/λ)   (11-11)

The probability that no dot is included in an interval of length x,
placed anywhere along the line, thus decreases exponentially as x increases,
being unity for x = 0 (it is certain that no dot is included in




a zero-length interval), being 0.368 for x = λ, the mean distance between
dots (i.e., a bit more than a third of the intervals between dots
are larger than λ) and dropping off rapidly to zero for x larger
than λ.

The probability density for this distribution is the derivative of P₀;
it supplies the answer to the following question. We start at an arbitrary
point and move along the line: What is the probability that the
first dot encountered lies between x and x + dx from the start? Equation
(11-3) indicates that this probability, f(x) dx, is equal to the product
of the probability e^(−x/λ) that no dot is encountered in length x,
times the probability dx/λ that a dot is encountered in the next interval
dx. Therefore

f(x) = (1/λ)e^(−x/λ)   (11-12)

is the probability density for encountering the first dot at x (i.e., f dx
is the probability that the first dot is between x and x + dx).

As with discrete distributions, one can compute expected values and
variances for probability densities. For example, the expected distance
one goes, from an arbitrarily chosen point on the line, before a
dot is encountered is

<x> = ∫₀^∞ x f(x) dx = λ ∫₀^∞ u e^(−u) du = λ   (11-13)

for the randomly distributed dots. The variance of this distance is

(σ_x)² = ∫₀^∞ (x − λ)² f(x) dx = λ²   (11-14)

so the standard deviation of x, σ_x = λ, is as large as the mean value
λ of x, an indication of the variability of the interval sizes between
the randomly placed dots.

We can go on to ask what is the probability P₁(x/λ) of finding just
one dot in an interval of length x of the line. This is obtained by using
Eq. (11-3) again to show that the probability of finding the first dot
between y and y + dy from the beginning and no other dot between this
and the end of the interval x is equal to the product f(y) dy P₀[(x − y)/λ].
Then we can use Eq. (11-4) to show that the probability P₁(x/λ) that
the one dot is somewhere within the interval x is the integral

P₁(x/λ) = ∫₀^x (1/λ)e^(−y/λ) dy e^(−(x−y)/λ) = (x/λ) e^(−x/λ)

An extension of this argument quickly shows that the probability that
there are exactly n dots in the interval x is




P_n(x/λ) = ∫₀^x f(y) dy P_{n−1}[(x − y)/λ] = [(x/λ)^n/n!] e^(−x/λ)

We have thus derived another discrete distribution function, called
the Poisson distribution, the set of probabilities P_n(x/λ), that n dots,
randomly placed along a line, occur in an interval of length x of this
line. More generally, a sequence of events has a Poisson distribution
if the outcome of an event is a positive integer and if the probability
that the outcome is the integer n is

P_n(E) = (E^n/n!)e^(−E);  Σ_{n=0}^∞ P_n(E) = 1   (11-15)

If E < 1, P (E) is larger than any other P n ; if E > 1, P n is maxi- 
mum for n ^E, tending toward zero for n larger or smaller than 
this. Quantity E is the expected value of n, for 

<n> = Σ_{n=0}^∞ n P_n(E) = E  and  (σ_n)² = Σ_{n=0}^∞ (n − E)² P_n(E) = E   (11-16)

Arrivals of customers to a store during some interval of time T or 
the emission of particles from a radioactive source in a given inter- 
val of time— both of these have Poisson distributions. We shall en- 
counter distributions like these later on. 
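The whole chain from Eq. (11-11) to Eq. (11-15) can be verified numerically: generate dots whose gaps follow the exponential density (11-12) and tally how many dots land in an interval of length x. A sketch (Python; λ, x, and the number of trials are arbitrary choices):

    # Sketch: counts of randomly placed dots in an interval of length x
    # should follow the Poisson distribution (11-15) with E = x/lam.
    import random
    from math import exp, factorial

    lam, x, trials = 1.0, 3.0, 20000

    def count_in_interval():
        pos, n = random.expovariate(1.0 / lam), 0   # exponential gaps, mean lam
        while pos < x:
            n += 1
            pos += random.expovariate(1.0 / lam)
        return n

    E = x / lam
    counts = [count_in_interval() for _ in range(trials)]
    for n in range(6):
        observed = counts.count(n) / trials
        predicted = (E**n / factorial(n)) * exp(-E)
        print(n, round(observed, 3), round(predicted, 3))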

The Normal Distribution 

In the limit of large values of N for the binomial distribution or of 
large values of E for the Poisson distribution, the variable n can be 
considered to be proportional to a continuous variable x, and the 
probability P n approaches, in both cases, the same probability den- 
sity F(x). This function F(x) has a single maximum at x = <x> , the 
expected value of x (which we shall write as X, for the time being, 
to save space). It drops off symmetrically on both sides of this maxi- 
mum, the width of the peak being proportional to the standard devia- 
tion σ, as shown in Fig. 11-1. Function F has the simplest form
compatible with these requirements, plus the additional requirement 
that the integrals representing expected values converge, 

F(x − X) = [1/σ√(2π)] e^(−(x−X)²/2σ²);  ∫_{−∞}^∞ F(x − X) dx = 1

<x> = ∫_{−∞}^∞ x F(x − X) dx = X;  <(x − X)²> = σ²   (11-17)







FIG. 11-1. The normal distribution 



This is known as the normal, or Gaussian, distribution. It is typical
of the behavior of a system subject to a large number of small,
independent random effects. As an example, we might take the limiting
case of a symmetric random walk (p = 1/2), where the number N
of steps is large but the size of each step is small, say δ = σ/√N.
Then, if a particular sequence of steps turns out to have had n positive
steps and N − n negative ones, the net displacement would be
x = σ(2n − N)/√N and the probability of such a displacement would be

P_n(N) = N!(1/2)^N / {[(1/2)N + (x/2σ)√N]! [(1/2)N − (x/2σ)√N]!}

since n = (1/2)N + (x/2σ)√N.

When N is very large, x tends to become practically a continuous
variable and in the limit P_n(N) dn → F dx, where F is the probability
density

F = lim_{N→∞} (√N/2σ) P_n(N) = lim_{N→∞} (√N/2σ) N!(1/2)^N / {[(1/2)N + (x/2σ)√N]! [(1/2)N − (x/2σ)√N]!}

To evaluate this for large N we use the asymptotic formula for the
factorial function,

n! → √(2πn) n^n e^(−n);  n > 10   (11-18)






which is called Stirling's formula. Using it for each of the three factorials
and rearranging factors, we obtain



F = lim_{N→∞} {1/[σ√(2π) √(1 − x²/σ²N)]} [1 + x/(σ√N)]^(−(1/2)N − (x/2σ)√N) [1 − x/(σ√N)]^(−(1/2)N + (x/2σ)√N)

= lim_{N→∞} [1/σ√(2π)] (1 − x²/σ²N)^(−(1/2)N − 1/2) [1 − x/(σ√N)]^((x/2σ)√N) [1 + x/(σ√N)]^(−(x/2σ)√N)

By using the limiting definition of the exponential,

lim_{n→∞} (1 + a/n)^n = e^a   (11-19)



this expression reduces to that of Eq. (11-17), with X = 0. The termi- 
nal point of a random walk with a large number of small steps is dis- 
tributed normally, as is any other effect which is the result of a large 
number of independent, random components. A proof that the limiting 
form of the Poisson distribution also is normal constitutes one of the 
problems. The distribution of random errors in a series of measure- 
ments is usually normal, as is the distribution of shots at a target. In 
fact the normal distribution is well-nigh synonymous with the idea of 
randomness. 
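The passage of the binomial distribution into the normal one can also be watched numerically, without Stirling's formula. A sketch (Python; N = 100 is an arbitrary moderate value) compares the scaled binomial probability (√N/2σ)P_n(N) with F(x) of Eq. (11-17) for the symmetric walk:

    # Sketch: scaled binomial vs. the normal density, p = 1/2,
    # x = sigma (2n - N)/sqrt(N), F ~ (sqrt(N)/2 sigma) P_n(N).
    from math import comb, exp, pi, sqrt

    N, sigma = 100, 1.0
    for n in (40, 45, 50, 55, 60):
        x = sigma * (2 * n - N) / sqrt(N)
        scaled = (sqrt(N) / (2 * sigma)) * comb(N, n) * 0.5**N
        gauss = exp(-x**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
        print(n, round(scaled, 4), round(gauss, 4))

Already at N = 100 the two columns agree to a few parts in a thousand near the peak.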




Velocity Distributions



Probability distributions are the connecting link between atomic
characteristics and thermodynamic processes. We mentioned in Chapter
1 that each thermodynamic state of a system corresponded to any
of a large number of microstates, macroscopically indistinguishable
but microscopically different configurations of the system's atoms. If
we had an assembly of thermodynamically equivalent systems, any
one of the systems may be in any one of this large set of microstates;
indeed each one of the systems will pass continuously from one microstate
to another of the set. All we can specify is the probability fᵢ of
finding the system (any one of them) in microstate i. In fact a specification
of the distribution function fᵢ, giving the value of fᵢ for
each microstate possible to the system in the given macrostate, will
serve to specify the thermodynamic state of each of the systems of
the assembly.

Momentum Distribution for a Gas 

In the case of a perfect gas of N point atoms, each atom is equally 
likely to be anywhere within the volume V occupied by the gas, but the 
distribution-in-velocity of the atoms is less uniformly spread out. 
What is needed is a probability density function, prescribing the prob- 
ability that an atom, chosen at random from those present, should have 
a specified momentum, both in magnitude and direction. We can vis- 
ualize this by imagining a three-dimensional momentum space, as
shown in Fig. 12-1, wherein the momentum p of any atom can be
given either in terms of its rectangular components p_x, p_y, p_z or
else in terms of its magnitude p and its spherical direction angles α
and β. The probability density f is then a function of p_x, p_y, p_z or
of p, α, β (or simply of the vector p) such that f(p) dV_p is the probability
that an atom of gas will turn out to have a momentum vector p
whose head lies within the volume element dV_p in momentum space
(i.e., whose x component is between p_x and p_x + dp_x, y component







Fig. 12-1. Coordinates in momentum space. 

is between p_y and p_y + dp_y, and z component is between p_z and
p_z + dp_z, where dV_p = dp_x dp_y dp_z = p² dp sin α dα dβ).

We do not assume that any given atom keeps the same momentum 
forever; indeed its momentum changes suddenly, from time to time, 
as it collides with another atom or with the container walls. But we 
do assume, for the present, that these collisions are rather rare 
events and that if we should observe a particular atom the chances 
are preponderantly in favor of finding it between collisions, moving 
with a constant momentum, and that the probability that it has mo- 
mentum p is given by the distribution function f(p) which has just 
been defined. 

If the state of the gas is an equilibrium state we would expect that 
f would be independent of time; if the state is not an equilibrium one, 
f may depend on time as well as on p. By the basic definition of a 
probability density, we must have that 

∫∫∫ f(p) dV_p = ∫_{−∞}^∞ dp_x ∫_{−∞}^∞ dp_y ∫_{−∞}^∞ dp_z f(p) = ∫₀^{2π} dβ ∫₀^π sin α dα ∫₀^∞ f(p) p² dp = 1   (12-1)



since it is certain that a given atom must have some value of momen- 
tum. The distribution function will enable us to calculate all the vari- 
ous average values characteristic of the particular thermodynamic 
state specified by our choice of f(p). For example, the mean kinetic 
energy of the gas molecule (between collisions, of course) is 

<K.E.>_tran = ∫₀^{2π} dβ ∫₀^π sin α dα ∫₀^∞ f(p)(p²/2m) p² dp   (12-2)







and the total energy of a gas of point atoms would be N<K.E.>,
where N is the total number of atoms in the system [see Eq. (2-1)].

If the gas is moving as a whole, there will be a drift velocity V 
superimposed on the random motion, so that f(p) is larger in one di- 
rection of p than in the opposite direction, more atoms going in the 
positive x direction (for example) than in the negative x direction. 
In this case the components of the drift velocity are 

V_x = <p_x/m> = ∫_{−∞}^∞ (p_x/m) dp_x ∫_{−∞}^∞ dp_y ∫_{−∞}^∞ dp_z f(p) = ∫₀^{2π} dβ ∫₀^π sin α dα ∫₀^∞ (p cos α/m) f(p) p² dp   (12-3)

and similarly for V_y and V_z.

If the gas is in equilibrium and its container is at rest the drift velocity
must of course be zero. In fact at equilibrium it should be just
as likely to find an atom moving in one direction as in another. In
other words, for a gas at equilibrium in a container at rest we should
expect to find the distribution function independent of the direction angles
α and β and dependent only on the magnitude p of the momentum.
In this case V = 0 and



4π ∫₀^∞ f(p) p² dp = 1;  <K.E.> = (2π/m) ∫₀^∞ f(p) p⁴ dp   (12-4)



The Maxwell Distribution 

We now proceed to obtain, by a rather heuristic argument, the momentum
distribution of a gas of point atoms in equilibrium at temperature
T. A more "fundamental" derivation will be given later; at present
understandability is more important than rigor. We have already
seen that at equilibrium the distribution function should be a function
of the magnitude p of the momentum, independent of the angles α and
β. One additional fact can be brought to bear: Eq. (11-3) states that,
if the magnitudes of the three components of the momentum are distributed
independently, then f(p) should equal the product of the probability
densities of each component separately, f(p) = F(p_x)·F(p_y)·F(p_z),
each of the factors being similar functions of the three components.

Moreover, since the atomic motions are entirely at random, it 
would seem reasonable that the function F should have the form of 
the normal distribution, Eq. (11-17), which represents the effects of 
randomness. Thus we would expect that the equilibrium momentum 
distribution would be 

f(p) = F(p_x)F(p_y)F(p_z) = [1/(σ√(2π))³] exp[(−p_x² − p_y² − p_z²)/2σ²]   (12-5)

where σ is the standard deviation of any one of the momentum components
from its zero mean value. This result strengthens our impres-






sion that we are on the right track, for the sum p_x² + p_y² + p_z² = p² is
independent of the angles α and β, and thus f is a function of the
magnitude of p and independent of α and β. In order to have f(p)
be, at the same time, the product of functions F of the individual
components and also a function of p alone, f must have the exponential
form of Eq. (12-5) (or, at least, this is the simplest function which
does so).

To find the value of the variance σ² in terms of the temperature of
the gas, we have recourse to the results of Chapters 2 and 3, in particular
of Eq. (3-2), relating the mean kinetic energy of translational
motion of the atoms in a perfect gas with the temperature T. To compute
mean values for the normal distribution we write down the following
integral formulas:

∫₀^∞ e^(−u²/a) du = (1/2)√(πa)

∫₀^∞ e^(−u²/a) u^(2n) du = (1/2)·(1/2)·(3/2)···[(2n − 1)/2]·a^n √(πa)   (12-6)

∫₀^∞ e^(−u²/a) u^(2n+1) du = (1/2) n! a^(n+1)

Therefore the mean value of the atomic kinetic energy is

<K.E.> = 4π ∫₀^∞ (p²/2m) f(p) p² dp = [4π/2m(σ√(2π))³] ∫₀^∞ p⁴ e^(−p²/2σ²) dp = (3/2)(σ²/m)

which must equal (3/2)kT, according to Eq. (3-2). Therefore the variance
σ² is equal to mkT, where m is the atomic mass, k is Boltzmann's
constant, and T is the thermodynamic temperature of the gas.

Thus a rather heuristic argument has led us to the following mo- 
mentum distribution for the translational motion of atoms in a perfect 
gas at temperature T, 

f(p) = (2πmkT)^(−3/2) e^(−p²/2mkT)   (12-7)

which is called the Maxwell distribution. It is often expressed in
terms of velocity v = p/m instead of momentum. Experimentally we
find that it corresponds closely to the velocity distribution of molecules
in actual gases. The distribution is a simple one, being isotropic,
with a maximum at p = 0 and going to zero as p → ∞. The integral
giving the fraction of particles that have speeds larger than v
is






4π ∫_{mv}^∞ f(p) p² dp = (4/√π) e^(−mv²/2kT) ∫₀^∞ e^(−u²) u [mv²/2kT + u²]^(1/2) du

≈ 1.13 √(mv²/2kT) e^(−mv²/2kT)  when mv² ≫ 2kT

Thus about half of the atoms have speeds greater than √(2kT/m), about
1/25 of them have speeds greater than 2√(2kT/m), and only one atom
in about 2400 has a speed greater than 3√(2kT/m).
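These fractions are easy to confirm by sampling, since each momentum component is normally distributed. A sketch (Python; we work in units where √(2kT/m) = 1, and the sample size is arbitrary):

    # Sketch: fraction of Maxwell-distributed speeds above v0, 2 v0, 3 v0,
    # with v0 = sqrt(2kT/m) = 1; each velocity component is Gaussian with
    # variance kT/m = 1/2.
    import random
    from math import sqrt

    samples, comp_sigma = 200000, 1.0 / sqrt(2.0)

    def speed():
        vx, vy, vz = (random.gauss(0.0, comp_sigma) for _ in range(3))
        return sqrt(vx * vx + vy * vy + vz * vz)

    vs = [speed() for _ in range(samples)]
    for threshold in (1.0, 2.0, 3.0):
        frac = sum(1 for v in vs if v > threshold) / samples
        print(threshold, frac)   # about 0.57, 0.046, 0.0004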

Mean Values 

The mean velocity of the atoms is, of course, zero, since the mo- 
mentum distribution is symmetric. The mean speed and the mean- 
squared speed are 



<v> = <p/m> = (4π/m) ∫₀^∞ p³ e^(−p²/2mkT) dp/(2πmkT)^(3/2) = √(8kT/πm)

<v²> = <p²/m²> = 3(kT/m)   (12-8)

We note that the mean of the square of the speed is not exactly equal
to the square of the mean speed (8/π is not exactly equal to 3, although
the difference is not large). The mean molecular kinetic energy
of translation is proportional to T; the mean molecular speed is
proportional to √T.

If the gas is a mixture of two kinds of molecules, one with mass m₁,
the other with mass m₂, then each species of molecule will have its
own distribution in velocity, the one with m₁ instead of m in the expression
of Eq. (12-7), the other with m₂ instead of m. This is equivalent
to saying that the mean kinetic energy of translational motion of
each species is (3/2)kT, no matter what the molecular weight of each
is, as long as the two kinds are in equilibrium at temperature T. In
fact if a dust particle of mass M is floating in the gas, being in equilibrium
with the molecules of the gas, it will be in continuous, irregular
motion (called Brownian motion), which is equivalent to the thermal
motion of the molecules, so its mean kinetic energy of translation
will also be (3/2)kT. Its mean-square speed, of course, will be less
than the mean-square speed of a gas molecule, by a factor equal to the
ratio of the mass of the molecule to the mass of the dust particle.

Finally, we should check to make sure that a gas with molecules 
having a Maxwell distribution of momentum will have a pressure cor- 
responding to the perfect gas law of Eq. (3-1). Of those molecules 
which have an x component of momentum equal to p x , (N/V) dA (p x /m) 
of them will strike per second on an area dA, perpendicular to the x 
axis, N/V being the number per unit volume and (p x /m)dA being the 
volume of the prism, in front of dA, which contains all the molecules 




that will strike dA in a second. Each such molecule, striking dA, 
would impart a momentum 2p x if dA were a part of the container 
wall, so that the average momentum given to dA per second, which is 
equal to the pressure times dA, is (see Fig. 2-1) 

P dA = (N/V) dA ∫_{−∞}^∞ dp_z ∫_{−∞}^∞ dp_y ∫₀^∞ (2p_x²/m) f(p) dp_x

or

P = [N/V(2πmkT)^(3/2)] ∫₀^∞ e^(−p_x²/2mkT)(2p_x²/m) dp_x ∫∫_{−∞}^∞ e^(−(p_y²+p_z²)/2mkT) dp_y dp_z

= [2N/mV√(2πmkT)] ∫₀^∞ p_x² e^(−p_x²/2mkT) dp_x = NkT/V = nRT/V   (12-9)

where integration is only for positive values of p x since only those 
with positive values of p x are going to hit the area dA in the next 
second; the ones with negative values have already hit. Thus a gas 
with a Maxwellian distribution of momentum obeys the perfect gas 
law. 

Collisions between Gas Molecules 

Most of the time a gas molecule is moving freely, unaffected by the
presence of other molecules of the gas. Occasionally, of course, two
molecules collide, bouncing off with changed velocities. Roughly speaking,
if two molecules come within a certain distance R of each other
their relative motion is affected and we say they have collided; if their
centers are farther apart than R they are not affected. To each molecule,
all other molecules behave like targets, each of area σ_c = πR²,
perpendicular to the path of the molecule in question. If the path of
this molecule's center of mass happens to intersect one of these targets,
a collision has occurred and the path changes direction. Since
there are N/V molecules in a unit volume, then in a disk-like region,
unit area wide and dx thick, there are (N/V) dx molecules.
Therefore, the fraction of the disk obstructed by targets is (Nσ_c/V) dx
and consequently the chance of the molecule in question having a collision
while moving a distance dx is (Nσ_c/V) dx. Target area σ_c is
called the collision cross section of the molecules.

Thus a collision comes at random to a molecule as it moves
around; the density (1/λ) of their occurrence along the path of its motion
is Nσ_c/V. Reference back to the discussion before and after Eqs.
(11-11) and (11-12) indicates that if the chance of encountering a
"dot" (i.e., a collision) is dx/λ, λ is the mean distance between col-




lisions (or dots) and the probability that the molecule travels a distance
x without colliding and then has its next collision in the next dx
of travel is

f(x) dx = (dx/λ) e^(−x/λ)  where λ = (V/σ_c N)   (12-10)

The mean distance between collisions λ is called the mean free path
of the molecule. We see that it is inversely proportional to the density
(N/V) of molecules and also inversely proportional to the molecular
cross section σ_c. This mean free path is usually considerably
longer than the mean distance between molecules in a gas. For example,
σ_c for O₂ molecules is roughly 4 × 10⁻¹⁹ m² and N/V at standard
pressure and temperature (0°C and 1 atm) is approximately 2.5
× 10²⁵ molecules per m³. The mean distance between molecules is
then the reciprocal of the cube root of this, or approximately 3.5
× 10⁻⁹ m, whereas the mean free path is λ = V/Nσ_c ≈ 10⁻⁷ m, roughly
30 times larger. The reason, of course, is that the collision radius R
is about 4 × 10⁻¹⁰ m for O₂, about one-tenth of the mean distance between
molecules; thus only about 1/1000 of the volume is "occupied"
by molecules at standard conditions. The difference is even more
marked at low pressures. At 10⁻⁷ atmosphere λ is 1 meter in length,
but there are still 2.5 × 10¹⁸ molecules per m³ at this pressure, hence
the mean distance between molecules is roughly 0.7 × 10⁻⁶ m. Neither
λ nor σ_c is dependent on the velocity distribution of the molecules.

We can also talk about a mean time τ between collisions, although
this quantity does depend on the molecular velocity distribution. For
a molecule having a speed p/m it would take, on the average, mλ/p
seconds for it to go from one collision to the next. Therefore the
mean free time τ for a gas with a Maxwellian distribution of momentum
is

τ = <mλ/p> = 4πmλ ∫₀^∞ p e^(−p²/2mkT) dp/(2πmkT)^(3/2) = λ√(2m/πkT)   (12-11)

which is not exactly equal to either λ/<v> or λ/√<v²> but it is not
very different from either. The mean free time decreases with increasing
temperature because an increase in temperature increases
the mean molecular speed, whereas it does not change λ. Since the
mean speed of an oxygen molecule at standard conditions is about
400 m/sec, the mean free time for these conditions is about 3 × 10⁻¹⁰
sec; at 0°C and 10⁻⁷ atm it is about (1/300) sec.
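The numbers in the last two paragraphs follow from simple arithmetic, which the sketch below (Python) repeats, using the rough cross section and number density quoted above:

    # Sketch: mean free path and mean free time for O2 at 0 C and 1 atm,
    # from Eqs. (12-10) and (12-11), with the rough values used in the text.
    from math import pi, sqrt

    k = 1.38e-23        # Boltzmann's constant, J/K
    m = 5.3e-26         # mass of an O2 molecule, kg
    sigma_c = 4e-19     # collision cross section, m^2
    n_per_V = 2.5e25    # molecules per m^3

    lam = 1.0 / (n_per_V * sigma_c)           # mean free path
    tau = lam * sqrt(2 * m / (pi * k * 273))  # mean free time at T = 273 K
    spacing = n_per_V ** (-1.0 / 3.0)         # mean distance between molecules

    print("lambda =", lam, "m")       # about 1e-7 m
    print("spacing =", spacing, "m")  # about 3.4e-9 m
    print("tau =", tau, "s")          # about 3e-10 s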




The Maxwell-Boltzmann Distributions



When the mean density of particles in a gas is not uniform throughout
the gas, or when electric or gravitational forces act on the molecules,
then the distribution function for the molecules depends on position
as well as momentum, and may also depend on time. In such
cases we must talk about the probability f(r,p,t) dV_r dV_p that a molecule
is in the position volume element dV_r = dx dy dz at the point
x, y, z (denoted by the vector r) within the container, and has a momentum
in the momentum volume element dV_p = dp_x dp_y dp_z at p_x,
p_y, p_z (denoted by the vector p) at time t. Because f is a probability
density we must have ∫dV_r ∫dV_p f(r,p,t) = 1, where the integration
over dV_r is over the interior of the container enclosing the gas and
where the integration over dV_p is usually over all magnitudes and directions
of p.

Phase Space 

To determine the dependence of f on r, p, and t we shall work out 
a differential equation which it must satisfy. The equation simply 
takes into account the interrelation between the force on a particle, 
its momentum, and its position in space. In addition to the time vari- 
able, f is a function of six coordinates, the three position coordinates
and the three momentum coordinates. A point in this six- dimensional 
phase space represents both the position and momentum of the parti- 
cle. As the particle moves about in phase space, its momentum coor- 
dinates change in accordance with the force on the particle and its po- 
sition coordinates change in accordance with the momentum it has. 
Each molecule of the gas has its representative point in phase space; 
all the points move about as a swarm, their density at any time and 
any point in phase space being proportional to f(r,p,t). 

The point which is at (x, y, z, p_x, p_y, p_z) in phase space has a six-dimensional
"velocity" which has components (ẋ, ẏ, ż, ṗ_x, ṗ_y, ṗ_z), where
the dot represents the time derivative. We note the fact that the
"coordinates" are related to the "velocity" components in a rather
crosswise manner, for the x, y, z part of the velocity, ṙ = p/m, is
proportional to the momentum part p of the phase-space coordinates,
independent of r; and the momentum part of the velocity, ṗ = F(r), is
the particle's rate of change of momentum, which is equal to the force
F on the particle, which is a function of r. Thus two points in phase
space which have the same space components x, y, z but different momentum
coordinates p_x, p_y, p_z have "velocities" with equal momentum
components ṗ_x, ṗ_y, ṗ_z [because F(r) is the same for both points]
but different space components ẋ, ẏ, ż (since the p's differ for the two
points). Vice versa, two points which have the same momentum coordinates
but different space coordinates will have six-dimensional
velocities with the same space components ẋ, ẏ, ż but different momentum
components ṗ_x, ṗ_y, ṗ_z.

At the instant of collision, the two points in phase space, represent- 
ing the two colliding molecules, suddenly change their momentum co- 
ordinates. In other words, those two points in phase space disappear 
from their original positions and reappear with a new pair of momen- 
tum coordinates. Any text in hydrodynamics will show that the expression
(∂ρ/∂t) + div ρv, where ρ is the fluid density and v its velocity,
is equal to the net creation of fluid at the point x, y, z. Extension to
phase space indicates that

∂f/∂t + ∂(ẋf)/∂x + ∂(ẏf)/∂y + ∂(żf)/∂z + ∂(ṗ_x f)/∂p_x + ∂(ṗ_y f)/∂p_y + ∂(ṗ_z f)/∂p_z = ∂f/∂t + div_r(ṙf) + div_p(ṗf)   (13-1)



measures the net appearance of points in a six-dimensional volume
element at (x, y, z, p_x, p_y, p_z), the difference between collision-produced
appearances and disappearances per second. Thus the expression
(13-1) should equal a function Q(r,p,t), called the collision function,
the form of which is determined by the nature of the molecular collisions.

The Boltzmann Equation 

As stated two paragraphs ago, the "velocity" components ẋ, ẏ, ż
are independent of r and the components ṗ_x, ṗ_y, ṗ_z are independent
of the p's (but depend on r). Therefore Eq. (13-1) becomes

∂f/∂t + ẋ(∂f/∂x) + ẏ(∂f/∂y) + ż(∂f/∂z) + ṗ_x(∂f/∂p_x) + ṗ_y(∂f/∂p_y) + ṗ_z(∂f/∂p_z) = Q

or




(∂f/∂t) + (p/m)·grad_r f + F·grad_p f = Q(r,p,t)   (13-2)

where F(r) is the force on a gas molecule when it is at position r. 
This equation, which is called the Boltzmann equation, can also be ob- 
tained by assuming that the swarm of points, representing the mole- 
cules, as it moves along in phase space, only changes its density be- 
cause of the net difference between appearances and disappearances 
produced by collisions. 

The density of the swarm at r,p,t is proportional to f(r,p,t). A 
time dt later this swarm is at r + (p/m) dt, p + F dt, t + dt, so we 
should have 



f[r + (p/m) dt, p + F dt, t + dt] − f(r, p, t) = Q dt



the difference being the net gain of points in time dt. Expansion of the 
first f in a Taylor's series, subtraction and division by dt results in 
Eq. (13-2) again. 

For a gas in equilibrium, under the influence of no external forces, 
its molecules have a Maxwell distribution in momentum, f is independ- 
ent of t and of r. Since f must satisfy Eq. (13-2), we see that for this 
case Q must be zero for all values of r and p. In other words, at 
equilibrium, for every pair of molecules whose points are in a given 
region of phase space and which collide, thus disappearing from the 
region, there is another pair of molecules with momentum just after 
collision such that they appear in the region; for every molecule which 
loses a given momentum by collision, there is somewhere another mol- 
ecule which gains this momentum by collision. 

Now suppose the gas is under the influence of a conservative force,
representable by a potential energy φ(r), such that F = −grad_r φ. If
the gas is in equilibrium, we would expect that the balance between
loss and gain of points still held and that Q is zero everywhere. However
now the density of the gas may differ in different parts of the container,
so f may be a function of r as well as p. At equilibrium,
though, the dependence of f on p should still be Maxwellian, since the
temperature is still uniform throughout the gas. In other words the
distribution function should have the form f(r,p,t) = f_r(r)·f_p(p), where
factor f_p will have the Maxwellian form of Eq. (12-7).

Therefore for a gas in equilibrium at temperature T in the presence
of a potential field φ, the Boltzmann equation will be

(p/m)·grad_r f − (grad_r φ)·(grad_p f) = 0

Because f = f_r f_p and since f_p is given by Eq. (12-7) we have grad_p f
= −(p/mkT) f_r f_p, so the equation further simplifies,




(p/m)·[grad_r f_r + (f_r/kT) grad_r φ] f_p = 0

or

grad_r f_r = −(f_r/kT) grad_r φ  or  f_r(r) = B e^(−φ/kT)   (13-3)

Thus the distribution function for this case is

f(r,p) = [B/(2πmkT)^(3/2)] exp[−(1/kT)(p²/2m + φ)]   (13-4)



This is known as the Maxwell-Boltzmann distribution. Constant B
must be adjusted so that the integral of f over all allowed regions of
phase space is unity. The formula states that the probability of presence
of a molecule at a point r,p in phase space is determined by its
total energy (p²/2m) + φ(r) = H(r,p); the larger H is, the smaller is
the chance that a molecule is present; the smaller T is, the more
pronounced is this probability difference between points where H differs.

A Simple Example 

Suppose a gas at equilibrium at temperature T is confined to two
interconnected vessels, one of volume V₁ at zero potential energy,
the other of volume V₂ at the lower potential energy φ = −γ. The
connecting tube between the two containers should allow free passage
of the gas molecules, although its volume should be negligible compared
to V₁ or V₂. The potential difference between the vessels may
be gravitational, the vessels being at different elevations above sea
level; or it may be due to an electric potential difference, if the molecules
possess electric charges.

In this simple case the factor f_r = B e^(−φ/kT) of the Maxwell-Boltzmann
distribution is simply B throughout volume V₁ and B e^(γ/kT)
throughout V₂. Since the integral of f_r over the total volume must be
unity, we must have B = [V₁ + V₂ e^(γ/kT)]⁻¹. The number of molecules
per unit volume in the upper container V₁ is B times the total number
N of molecules in the system; the density of molecules in the
lower container V₂ is NB e^(γ/kT). Since the distribution in momentum
is Maxwellian in both vessels, the pressure in each container is kT
times the density of molecules there,

For V₁: P₁ = NkT/[V₁ + V₂ e^(γ/kT)]

For V₂: P₂ = NkT/[V₁ e^(−γ/kT) + V₂]   (13-5)






At temperatures high enough so that kT ≫ γ the exponentials in
the denominators of these expressions are nearly unity and the pressures
in the two vessels are both roughly equal to kT times the mean
density N/(V₁ + V₂) of molecules in the system. If the temperature is
less than (γ/k), however, the pressures in the two vessels differ appreciably,
being greater in the one at lower potential, V₂. When kT
≪ γ practically all the gas is in this lower container.

The mean energy of a point atom in the upper container is all kinetic,
and thus is (3/2)kT; the mean energy of a point atom in the
lower vessel is (3/2)kT plus its potential energy there, −γ. Consequently,
if the molecules are point atoms, having only translational
kinetic energy, the total energy of the system is the sum of the number
of molecules in each vessel times the respective mean energies,



U = NBV₁[(3/2)kT] + NBV₂ e^(γ/kT)[(3/2)kT − γ]

= (3/2)NkT − NγV₂/[V₁ e^(−γ/kT) + V₂]   (13-6)

which changes from N[(3/2)kT − γ] when kT ≪ γ and all the gas is
in V₂ to N[(3/2)kT − γV₂/(V₁ + V₂)] when kT ≫ γ and the density is practically
the same in both containers.

To compute the entropy and other thermodynamic potentials we
must first recognize that there are three independent variables, T,
V₁, and V₂. The appropriate partials of S can be obtained by the procedures
of Chapter 8,



(∂S/∂T)_{V₁V₂} = C_V/T = (1/T)(∂U/∂T)_{V₁V₂};  (∂S/∂V₁)_{T,V₂} = (∂P₁/∂T)_{V₁,V₂};  etc.



Therefore, 



S = S₀ + (3/2)Nk ln(T/T₀) + (3/2)Nk + Nk ln{[V₁ + V₂ e^(γ/kT)]/V₀} − (NγV₂/T)/[V₁ e^(−γ/kT) + V₂]   (13-7)

F = U − TS = −TS₀ − (3/2)NkT ln(T/T₀) − NkT ln{[V₁ + V₂ e^(γ/kT)]/V₀}




where S₀ and V₀ are constants of integration.

The heat capacity of the gas at constant V₁ and V₂,

C_{V₁V₂} = (3/2)Nk + (Nγ²V₁V₂/kT²) e^(−γ/kT)/[V₁ e^(−γ/kT) + V₂]²

is (3/2)Nk both for kT ≪ γ and for kT ≫ γ, but for intermediate
temperatures C_V is larger than this. At low temperatures nearly all
the molecules are in the lower vessel and additional heat merely
speeds up the molecules; at temperatures near γ/k the added heat
must push more molecules into the upper vessel as well as speed them
all up; at very high temperatures the density in the two containers is
nearly equal and additional heat again serves merely to increase kinetic
energy. We also note that P₁ = −(∂F/∂V₁), and similarly for P₂,
as required by Eqs. (8-8).

A More General Distribution Function 

The form of the Maxwell-Boltzmann distribution suggests some
generalizations. In Eq. (13-4) the expression in the exponent is the
total energy of position and of motion of the center of mass of the
molecule, and f itself is the probability density of position and momentum
of the center of mass. One obvious generalization is to put
the total energy of the molecule in the exponent and to expect that the
corresponding f is the probability density that each of the molecular
coordinates and momenta have specified values. The position coordinates
q need not be rectangular ones; they may be angles and radii,
or other orthogonal curvilinear coordinates. We specify the nature of
these coordinates in terms of their scale factors h, such that hᵢ dqᵢ
represents actual displacement in the qᵢ direction (as r dθ is displacement
in the θ direction). For rectangular coordinates h is unity;
for curvilinear coordinates h may be a function of the q's.

Suppose each molecule has ν degrees of freedom; then it will need
ν coordinates q₁, q₂, …, q_ν to specify its configuration and position
in space. If the coordinates are mutually perpendicular and if hᵢ is
the scale factor for coordinate qᵢ, the volume element for the q's is
dV_q = h₁ dq₁ h₂ dq₂ ⋯ h_ν dq_ν, and the kinetic energy of the molecule is

K.E. = (1/2) Σ_{i=1}^{ν} mᵢhᵢ²q̇ᵢ²,  q̇ᵢ = dqᵢ/dt   (13-8)

where mᵢ is the effective mass for the i-th coordinate (total mass or
reduced mass or moment of inertia, as the case may be).

Following the procedures of classical mechanics we define the momentum
pᵢ, conjugate to qᵢ, as






pᵢ = ∂(K.E.)/∂q̇ᵢ = mᵢhᵢ²q̇ᵢ

We now define the Hamiltonian function for the molecule as the total 
energy of the molecule, expressed in terms of the p's and q's, 

H(p,q) = Σ_{i=1}^{ν} (1/2mᵢ)(pᵢ/hᵢ)² + φ(q)   (13-9)

where φ is the potential energy of the molecule, expressed in terms
of the q's. The h's may also be functions of the q's, but the only dependence
of H on the p's is via the squares of each pᵢ, as written
specifically in the sum. It can then be shown that the corresponding
scale factors for the momentum coordinates, the other half of the
2ν-dimensional phase space for the molecule, are the reciprocals of
the h's, so that the momentum volume element is (dp₁/h₁)(dp₂/h₂) ⋯
(dp_ν/h_ν) = dV_p.

As an example, consider a diatomic molecule, with one atom of
mass m₁ at position x₁, y₁, z₁ and another of mass m₂ at x₂, y₂, z₂. We
can use, instead of these coordinates, the three coordinates of the center
of mass, x = (m₁x₁ + m₂x₂)/(m₁ + m₂) and similarly for y and z,
plus the distance r between the two atoms and the spherical angles θ
and φ giving the direction of r. Then the total kinetic energy of the
molecule, expressed in terms of the velocities, is

(1/2)(m₁ + m₂)(ẋ² + ẏ² + ż²) + [m₁m₂/2(m₁ + m₂)](ṙ² + r²θ̇² + r² sin²θ φ̇²)

so that the volume element dV_q = dx dy dz dr r dθ r sin θ dφ. The
momenta are

p_x = (m₁ + m₂)ẋ, etc.;  p_r = m_r ṙ;  p_θ = m_r r²θ̇;  p_φ = m_r r² sin²θ φ̇

where m_r is the reduced mass [m₁m₂/(m₁ + m₂)]. The kinetic energy
expressed in terms of the p's is

[1/2(m₁ + m₂)](p_x² + p_y² + p_z²) + (1/2m_r)[p_r² + (p_θ/r)² + (p_φ/r sin θ)²]

and the volume element dV_p = dp_x dp_y dp_z dp_r (dp_θ/r)(dp_φ/r sin θ).
For a molecule with ν degrees of freedom, the distribution function
is






f(q,p) = (1/Z_q Z_p) e^(−H(p,q)/kT)

where

Z_q = ∫⋯∫ e^(−φ(q)/kT) dV_q

Z_p = ∫⋯∫ exp[−Σ_{i=1}^{ν} pᵢ²/2mᵢhᵢ²kT] dV_p = (2πkT)^(ν/2) √(m₁m₂⋯m_ν)   (13-10)



and where f(q,p) dV_q dV_p is the probability that the first molecular
coordinate is between q₁ and q₁ + dq₁, the second is between q₂ and
q₂ + dq₂, and so on, that the first momentum coordinate lies between
p₁ and p₁ + dp₁, and so on. Since the scale factors hᵢ are not functions
of the p's, they enter as simple constants in the integration over
the p's and thus the normalizing constant Z_p can be written out explicitly.
We can also compute explicitly the mean total kinetic energy
of the molecule, no matter what its position or orientation:



<K.E.> 



Pi 



total = ^ /- J e ^ AT dv q 5p S "' /?^lq 

v pi a! 

! -4 2mTkT j dV P 



fa^);/"v/& «*(-! 



x dui du 2 ••• dui, = |kT (13-11) 

Mean Energy per Degree of Freedom 

Therefore each degree of freedom of the molecule has a mean kinetic
energy (1/2)kT, no matter whether the corresponding coordinate
is an angle or a distance and no matter what the magnitude of the mass
mᵢ happens to be. The thermal energy of motion of the molecule is
equally distributed among its degrees of freedom. The mean value of
the potential energy of course depends on the nature of the potential
function φ(q), although a comparison with the kinetic-energy terms
indicates that if the sole dependence of φ on coordinate qᵢ is through
a quadratic term (1/2)mᵢωᵢ²qᵢ² then the mean potential energy for this
coordinate is also (1/2)kT (see below).

This brings us to an anomaly, the resolution of which will have to 
await our discussion of statistical mechanics. A diatomic molecule 




has six degrees of freedom (three for the center of mass, one for the
interatomic distance, and two angles for the direction of r, as given
above) even if we do not count the electrons as separate particles.
We should therefore expect the total kinetic energy of a diatomic gas
to be the number of molecules N times six times (1/2)kT and thus U,
the internal energy of the gas, to be at least 3NkT (it should be more,
for there must be a potential energy, dependent on r, to hold the molecule
together, and this should add something to U). However, measurements
of heat capacity C_v = (∂U/∂T)_V show that just above its boiling
point (about 20°K) the U of H₂ is more nearly (3/2)NkT, that between
about 100°K and about 500°K the U of H₂, O₂, and many other
diatomic gases is roughly (5/2)NkT and that only well above 1000°K
is the U for most diatomic molecules equal to the expected value of
(6/2)NkT (unless the molecules have dissociated by then). The reasons
for this discrepancy can only be explained in terms of quantum
theory, as will be shown in Chapter 22.



A Simple Crystal Model 

Hitherto we have tacitly assumed that the coordinates q of the 
molecules, as used in Eq. (13-9), are universal coordinates, referred 
to the same origin for all molecules. This need not be so; we can re- 
fer the coordinates of each molecule to its own origin, provided that 
the form of the Hamiltonian for each molecule is the same when ex- 
pressed in terms of these coordinates. For example, each atom in a 
crystal lattice is bound elastically to its own equilibrium position, 
each oscillating about its own origin. The motion of each atom will af- 
fect the motion of its neighbors, but if we neglect this coupling, we can 
consider each atom in the lattice to be a three-dimensional harmonic 
oscillator, each with the same three frequencies of oscillation (in 
Chapter 20 we shall consider the effects of the coupling, which we 
here neglect). In a cubic lattice all three directions will be equivalent, 
so that (without coupling) the Hamiltonian for the j-th atom is 

H_j = (1/2m)(p_{xj}² + p_{yj}² + p_{zj}²) + (1/2)mω²(x_j² + y_j² + z_j²)   (13-12)

where x_j is the x displacement of the j-th atom from its own equilibrium
position and p_{xj} is the corresponding mẋ_j. Thus expressed, H
has the same form for any atom in the lattice.

In this case we can redefine the Maxwell-Boltzmann probability
density as follows: (1/Z_q Z_p) exp(−H_j/kT) is the probability density
that the j-th atom is displaced x_j, y_j, z_j away from its own equilibrium
position and has momentum components equal to p_{xj}, p_{yj}, p_{zj}. The normalizing
constants are [using Eqs. (12-6)]




Z_q = [∫_{−∞}^∞ e^(−mω²x²/2kT) dx]³ = (2πkT/mω²)^(3/2)

Z_p = (2πmkT)^(3/2)



Therefore the probability density that any atom, chosen at random, has 
momentum p and is displaced a distance r from its position of equi- 
librium in the lattice is 



= / " 



>3 



f(r ' p) = to ex ? 



2 2 2 

p murr 



2mkT 2kT 



(13-13) 



for the simplified model of a crystal lattice we have been assuming. 

We can now use this result to compute the total energy U, kinetic
and potential, of the crystal of N atoms, occupying a volume V, at
temperature T. In the first place we note that even at T = 0 the crystal
has potential energy of over-all compression, when it is squeezed
into a volume less than its equilibrium volume V₀. If the compressibility
at absolute zero is κ, this additional potential energy is
(V − V₀)²/2κV₀; the potential energy increases whether the crystal is
compressed or stretched. We should also notice that the natural frequencies
of oscillation of the atoms also are affected by compression;
ω is a function of V.

The thermal energy of vibration of a typical atom in this crystal is 
given by the integral of Hf over all values of r and p. Again using 
Eqs. (12-6) we have 

∫⋯∫ H f dV_p dV_q = [4π/2m(2πmkT)^(3/2)] ∫₀^∞ p⁴ e^(−p²/2mkT) dp + (3/2)mω² √(mω²/2πkT) ∫_{−∞}^∞ x² e^(−mω²x²/2kT) dx

= (3/2)kT + (3/2)kT = 3kT = 3(R/N₀)T   (13-14)

showing that the mean thermal kinetic energy per point particle is
(3/2)kT whether the particle is bound or free and that the mean thermal
potential energy of a three-dimensional harmonic oscillator is
also (3/2)kT, independent of the value of ω.

Thus, for this simple model of a crystal, with N = nN₀,

U = 3nRT + [(V − V₀)²/2κV₀];  C_v = 3nR = T(∂S/∂T)_V   (13-15)
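The equipartition result behind Eq. (13-14) can be checked by sampling directly from the distribution (13-13): each momentum component is Gaussian with variance mkT and each displacement component Gaussian with variance kT/mω². A sketch (Python; m, ω, and kT in arbitrary units with k = 1):

    # Sketch: Monte Carlo check that <H> = 3kT for one atom of the crystal
    # model, sampling the Gaussian factors of Eq. (13-13); units with k = 1.
    import random
    from math import sqrt

    m, w, kT, samples = 1.0, 2.0, 0.7, 100000

    total = 0.0
    for _ in range(samples):
        pc = [random.gauss(0.0, sqrt(m * kT)) for _ in range(3)]
        xc = [random.gauss(0.0, sqrt(kT / (m * w * w))) for _ in range(3)]
        H = sum(q * q for q in pc) / (2 * m) \
            + 0.5 * m * w * w * sum(q * q for q in xc)
        total += H

    print("<H> =", total / samples, "  expected 3kT =", 3 * kT)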

One trouble with this simple model is that it does not provide us
with enough to enable us to compute the entropy. We can obtain
(∂S/∂T)_V from Eq. (13-15) but we do not know the value of (∂S/∂V)_T =




(∂P/∂T)_V [see Eq. (8-12)], unless we assume the equation of state
(9-1) and set (∂P/∂T)_V = β/κ from it. But an even more basic deficiency
is that it predicts that C_v = 3nR for all values of T. As shown
in Fig. 3-1 and discussed prior to Eq. (9-2), C_v is equal to 3nR for
actual crystals only at high temperatures; as T goes to zero C_v goes
to zero. We shall show in Chapter 20 that this behavior is a quantum-mechanical
effect, related to the anomaly in the heat capacities of diatomic
molecules, mentioned in the paragraph after Eq. (13-11).

Magnetization and Curie's Law 

Another example of the use of the generalized Maxwell-Boltzmann
distribution is in connection with the magnetic equation of state of a
paramagnetic substance. Such a material has permanent magnetic dipoles
of moment μ connected to its constituent molecules or atoms.
If a uniform magnetic field ℬ = μ₀ℋ (μ₀ is not a moment, it is the
permeability of vacuum) is applied to the material, each molecule will
have an additional potential-energy term −μℬ cos θ in its Hamiltonian,
where θ is the angle between the dipole and the direction of the field
(and μ is the dipole's moment). We note that the ℬ and ℋ used here
are those acting on the individual molecules, which differ from the
fields outside the substance.

The distribution function, giving the probability density that the
molecule has a given momentum, position, and orientation, can be
written

f = f_p f_q f_θ;  f_θ = (1/Z_θ) e^(μℬ cos θ/kT)

where f_p is the momentum distribution, so normalized that the integral
of f_p over all momenta is unity; f_q is the position distribution
for all position coordinates except θ, the orientation angle of the
magnetic dipole (f_q is also normalized to unity). Thus the factor
f_θ gives the probability distribution for orientation of the magnetic
dipole; each dipole is most likely to be oriented along the field (θ = 0)
and least likely to be pointed against the field (θ = π). When kT ≫ μℬ
the difference is not great; the thermal motion is so pronounced that
the dipoles are oriented almost at random. When kT ≪ μℬ the difference
is quite pronounced. Nearly all the dipoles are lined up parallel
to the magnetic field; the thermal motion is not large enough to knock
many of them askew.

The factor f_θ can be normalized separately,

Z_θ = ∫₀^π e^(μℬ cos θ/kT) sin θ dθ = (2kT/μℬ) sinh(μℬ/kT)






and can be used separately to find the mean value of the component μ cos θ of the dipole moment along the magnetic field. This, times the number of dipoles per unit volume, is, of course, the magnetic polarization 𝒫 of the material, as defined in Chapter 3. And, since the magnetization is ℳ = μ₀V𝒫, we have



    ℳ = μ₀N (1/Z_θ) ∫₀^π (μ cos θ) e^{μℬ cos θ/kT} sin θ dθ

      = Nμμ₀ [(2/x)cosh x − (2/x²)sinh x] / [(2/x)sinh x] = Nμμ₀(coth x − 1/x);   x = μℬ/kT

      ≈ Nμ₀μ²ℬ/3kT   (kT ≫ μℬ);    ≈ Nμ₀μ   (kT ≪ μℬ)    (13-16)

Thus we see that at low fields (or high temperatures) the dipoles tend only slightly to line up with the field and the magnetization ℳ is proportional to ℬ = μ₀ℋ, but that at high magnetic intensities (or low temperatures) all the dipoles line up and the magnetization reaches its asymptotic value Nμ₀μ (i.e., it is saturated), as shown in Fig. 13-1.



FIG. 13-1. Magnetization curve for paramagnetic substances (plotted against x = μℬ/kT on a logarithmic scale from 0.1 to 10).
3xl0" 23 mks units, /KB is no larger than 3xl0 -23 joules for <B = 1 
weber per m 2 (= 10,000 gauss, a quite intense field). Since kT 
= 3x 10" 23 joules for T - 2° K, the parameter x = jxce/kT is consid- 
erably less than unity for 2 (for example) for temperatures greater 
than 30° K and/or (B less than 1 weber/m 2 . In such cases, where 
x<l, the polarization (P is much smaller than 3C, so that the 3C act- 
ing on the molecule is not much different from the X outside the ma- 
terial. 

Thus for most temperatures and field strengths, for paramagnetic materials like O₂, x is very small and Curie's law

    ℳ = nDℋ/T;   D = N₀μ₀²μ²/3k    (13-17)

is a good approximation for the magnetic equation of state. Kinetic theory has thus not only derived Curie's law [see Eq. (3-8)] and obtained a relation between the Curie constant D and the molecular characteristics of the material (such as μ and the basic constants N₀, μ₀, and k) but has also determined the limits beyond which Curie's law is no longer valid, and the equation of state which then holds.

For example, for the paramagnetic perfect gas of Chapter 7, the more accurate forms of the equation for T dS and of the adiabatic formula (7-14) are

    T dS = [(5/2)nR − (3nD/a²)(x² csch² x − 1)] dT − (nRT/P) dP + (3nD/a)(x csch² x − 1/x) dℋ

and

    (T^{5/2}/P)[(sinh x/x) e^{−x coth x}]^{3D/Ra²} = const.    (13-18)

where x = aℋ/T and a = μμ₀/k. This reduces to Eq. (7-14) when x is small.

This completes our discussion of paramagnetic materials. In the case of ferromagnetic materials, where the polarization 𝒫 is not small compared to ℋ, the field acting on the individual dipole is considerably modified by the polarization of the nearby dipoles; in fact the dipoles may tend to line up all by themselves. But it would go too far afield to discuss permanent magnetization.




Transport Phenomena



The Boltzmann equation (13-2) can also be used to calculate the 
progress of some spontaneous processes, such as the mixing of two 
gases by interdiffusion and the attenuation of turbulence in a gas by 
the action of viscosity. All these processes have to do with the trans- 
port of something, foreign molecules or electric charge or momentum, 
from one part of the gas to another; consequently they are called 
transport phenomena. Here, because the system is not in equilibrium, 
the collision term Q, which measures the net rate of entrance and 
exit of molecular points in phase space, is not zero. An exact calcu- 
lation of the dependence of Q on p and r is quite a difficult task, re- 
quiring a detailed knowledge of molecular behavior during a collision. 
Exact expressions for Q have been determined for only a few, simple 
cases. Luckily there is a relatively simple, approximate expression 
for Q, which will be good enough for the calculations of this chapter. 

An Approximate Collision Function 

The function Q in the Boltzmann equation represents the effects of collisions in bringing the gas into equilibrium. As we pointed out earlier, when the gas is in equilibrium Q is zero; as many molecules gain a given momentum per second by a collision as there are those that lose this momentum per second by a collision. And if the gas is close to equilibrium Q should be small. Suppose the solution of the Boltzmann equation for equilibrium conditions is f₀(r,p) and the solution for nonequilibrium conditions close to the equilibrium state is f(r,p). What we have just been saying, in regard to the behavior of Q near equilibrium, is that it should be proportional to f₀ − f, when f is nearly equal to f₀. A glance at Eq. (13-2) indicates that the proportionality constant has the dimensions of the reciprocal of time and can be written as 1/t_c(r,p), where t_c is called the relaxation time of the system, for reasons shortly to be apparent. We will thus assume that the Boltzmann equation, for conditions near equilibrium, can be written

    (∂f/∂t) + (p/m)·grad_r f + F·grad_p f = (f₀ − f)/t_c

where f₀ is the distribution for the nearest equilibrium state, and where f₀ − f is small compared to either f or f₀. Therefore the left-hand side of this equation is also small, and it would not produce additional first-order error to substitute f₀ for f on the left-hand side. Thus an approximate equation for near-equilibrium conditions is

    f ≈ f₀ − t_c[(∂f₀/∂t) + (p/m)·grad_r f₀ + F·grad_p f₀]    (14-1)

Collisions between gas molecules are fairly drastic interruptions of the 
molecule's motion. After the collision the molecule's direction of motion, 
and also its speed, may be drastically altered. Roughly speaking, it is as 
though each collision caused the participating molecule to forget its pre- 
vious behavior and to start away as part of an equilibrium distribution of 
momentum; only later in its free time, before its next collision, does the 
nonequilibrium situation have a chance to reaffect its motion. 

For example, we may find a gas of uniform density having initially a distribution in momentum f₁(p) of its molecules which differs from the Maxwell distribution (12-7); there may be more fast particles in relation to slow ones than (12-7) requires, or there may be more going in the x direction than in the y, or some other asymmetry of momentum which is still uniform in density. Such a distribution f₁, since it is not Maxwellian, is not an equilibrium distribution. However, since the lack of equilibrium is entirely in the "momentum coordinate" part of phase space, the distribution can return to equilibrium in one collision time; we would expect that at the next collision each molecule would regain its place in a Maxwell distribution, so to speak. The molecules do not all collide at once; the chance that a given molecule has not had its next collision, after a time t, is e^{−t/τ}, where τ is the mean free time between collisions [see Eqs. (11-11) and (12-11)]. Thus we would expect that our originally anisotropic distribution would "relax" from f₁ back to f₀ with an exponential dependence on time of e^{−t/τ} (note that τ is proportional to λ, the molecular mean free path).

But if f₀ is independent of r and there is no force F acting, a solution of Eq. (14-1) which starts as f = f₁ at t = 0 is

    f = f₀ + (f₁ − f₀)e^{−t/t_c}    (14-2)

which has just the form we persuaded ourselves it should have, except that the relaxation time t_c is in the exponent, rather than the mean free time τ = ⟨mλ/p⟩. We would thus expect that the relaxation time t_c entering Eq. (14-1) would be approximately equal to mλ/p. Detailed calculations for the few cases which can be carried out, plus indirect experimental checks (described later in this chapter), indicate that it is not a bad approximation to set t_c = ⟨mλ/p⟩ = τ. This will be done in the rest of this chapter.
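The survival argument behind Eq. (14-2) can be mimicked numerically. In the Python sketch below (illustrative; τ and the sample size are arbitrary), free times between collisions are drawn from the exponential density (1/τ)e^{−t/τ} of Eq. (11-11), and the fraction of molecules that have not yet collided after a time t is compared with e^{−t/τ}.

    import numpy as np

    rng = np.random.default_rng(1)
    tau = 1.0                                     # mean free time (arbitrary units)
    t_free = rng.exponential(tau, size=200_000)   # time to next collision for each molecule

    for t in [0.5, 1.0, 2.0, 3.0]:
        frac = (t_free > t).mean()                # fraction not yet collided at time t
        print(t, frac, np.exp(-t/tau))            # the two columns agree closely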




Electric Conductivity in a Gas 

Suppose that a certain number N_i of the molecules of a gas are ionized (N_i being small compared to the total number N of molecules) and suppose that initially the gas is at equilibrium at temperature T. At t = 0 a uniform electric field ℰ, in the positive x direction, is turned on. The ions will then experience a force eℰ in the x direction, where e is the ionic charge. Imposed on the random motion of the ions between collisions will be a "drift" in the x direction. This is not an equilibrium situation, since the drift velocity of the ions will heat up the gas. But if N_i/N is small, and if ℰ is small enough, the heating will be slow and we can neglect the term ∂f/∂t in Eq. (14-1) in comparison with the other terms.

Since the ions are initially uniformly distributed in space and 
since the ionic drift is slow, we can assume that f is more-or-less 
independent of r. Thus Eq. (14-1) for the ions becomes 

    f ≈ f₀ − t_c F·grad_p f₀ = f₀ − e t_c ℰ (∂f₀/∂p_x) = [1 + (e t_c ℰ/mkT)p_x] f₀    (14-3)
where f₀ is the Maxwell distribution of the neutral molecules,

    f₀ = [1/(2πmkT)^{3/2}] exp[−(p_x² + p_y² + p_z²)/2mkT]

Function f will be a good approximation to
the correct momentum distribution of the ions if the second term in the brackets of Eq. (14-3) is small compared to the first term, unity, over the range of values of p_x for which f₀ has any appreciable magnitude. The term e t_c ℰ p_x/mkT can be written (eλℰ/kT)(p_x⟨1/p⟩) if we assume that t_c = τ = ⟨mλ/p⟩, λ being the mean free path of the molecule [see Eq. (12-11)]. Since eλℰ is the energy that would be gained by the ion (in electron volts, if the ion is singly ionized) by falling through a mean free path in the direction of ℰ, and since kT in electron volts is T/7500, then for a gas (such as O₂) at standard conditions, where λ ≈ 10⁻⁷ m [see the discussion following Eq. (12-10)] and T ≈ 300, the factor eλℰ/kT ≈ ℰ/40,000, ℰ being in volts per meter. Thus even if ℰ is as large as 4000 volts per meter, the second term will not equal the first in Eq. (14-3) until p_x is 10 times the mean momentum ⟨p⟩, and by this time the exponential factor of f₀ will equal about e⁻⁵⁰. Thus, for a wide range of values of T and of ℰ, either f₀ is vanishingly small or else the second term in brackets of Eq. (14-3) is small compared to the first.

What Eq. (14-3) indicates is that the momentum distribution of the 
ions, in the presence of the electric field, is slightly nonisotropic; 
somewhat more of them are going in the direction of the field (p_x positive) than are going in the opposite direction (p_x negative). There is a net drift velocity of the ions in the x direction:

    U_x = ∫(p_x/m) f dV_p ≈ ∫(p_x/m)[1 + (e t_c ℰ/mkT)p_x] f₀(p) dV_p

        = (e t_c ℰ/mkT)(⟨p_x²⟩/m) = e t_c ℰ/m = λeℰ(2/πmkT)^{1/2} ≈ λeℰ/m⟨v⟩    (14-4)

where we have used the fact that for a Maxwell distribution f₀, ⟨p_x⟩ = 0 and ⟨p_x²/2m⟩ = (1/2)kT, and we have also used Eq. (12-11) for the mean free time τ.

We see that the drift velocity U of the ion is proportional to the electric intensity, as though the ion were moving through a viscous fluid. The proportionality factor M = e t_c/m ≈ λe/m⟨v⟩ is called the mobility of the ion. The current density I = N_i eU/V (in amperes per square meter) is

    I ≈ (N_i λe²/V)(2/πmkT)^{1/2} ℰ    (14-5)

obeying Ohm's law, with a conductivity N_i eM/V = N_i e²t_c/mV.
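The numbers quoted above make these formulas concrete. The Python sketch below evaluates the mobility and conductivity of Eqs. (14-4) and (14-5) for singly charged ions in O₂ at standard conditions; the ion density N_i/V is an arbitrary illustrative value, and λ ≈ 10⁻⁷ m is the figure used in the text.

    import numpy as np

    k   = 1.38e-23           # J/K
    e   = 1.60e-19           # C, ionic charge (singly ionized)
    m   = 5.3e-26            # kg, mass of an O2 molecule
    lam = 1.0e-7             # m, mean free path at standard conditions
    T   = 300.0              # K

    tc  = m*lam*np.sqrt(2.0/(np.pi*m*k*T))   # t_c = m*lambda*<1/p>
    M   = e*tc/m                             # mobility, Eq. (14-4)
    ni  = 1.0e16                             # ions per m^3 (illustrative)
    print(M, ni*e*M)                         # mobility and conductivity, Eq. (14-5)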

Drift Velocity 

It is interesting to see that the drift velocity, and therefore the cur- 
rent density, is proportional to the mean free time between collisions 
and is thus inversely proportional to the square root of the tempera- 
ture T. As T increases, the random velocity <v> of the ions (and 
neutral molecules) increases, but the drift velocity U of the ions de- 
creases. One can visualize the process by imagining the flight of an 
ion from one collision to its next. Just after each collision the ion 
comes away in a random direction, with no initial preference for the 
direction of ℰ. But during its free flight the electric field acts on it, turning its motion more and more in the positive x direction (its path is a portion of a parabola) and thus adding more and more positive x component to its velocity. This accentuation of the positive x motion is completely destroyed by the next collision (on the average) and the molecule starts on a new parabolic path. If the mean free time is long, the molecule has plenty of time to add quite a bit of excess v_x; if τ is small, the molecule hardly has time to get acted on by the field before it collides again. Thus the higher the temperature, the greater the random velocity ⟨v⟩, the shorter the mean free time τ, and the smaller the drift velocity and current density. This, of course, checks
with the measurements of gaseous conduction. 

In most ionized gases, free electrons will be present as well as 
positive ions. The electrons will also have a drift velocity, mobility, 



and current density, given by Eqs. (14-4) and (14-5), only with a negative value of charge e, a different value of λ, and a much
smaller value of mass m. Thus the drift velocity will be oppo- 
site in direction to that of the ions but, since the charge e is neg- 
ative, the current density is in the same direction as that of the 
ions. Since the electronic mean free path is roughly 2 to 4 times 
that of the ions and since the electronic mass is several thousand 
times smaller, the electronic mobility is 500 to 1000 times greater 
than that of the positive ions and therefore most of the current in an 
ionized gas is carried by the electrons. 

Diffusion 

Another nonequilibrium situation is one in which different kinds of 
molecules mix by diffusion. To make the problem simple, suppose we 
have a small number N_i of radioactive "tagged" molecules in a gas
of N nonradioactive molecules of the same kind. Suppose, at t = 0, 
the distribution in space of the tagged molecules is not uniform (al- 
though the density of the mixture is uniform). Thus the distribution 
function for the tagged molecules is a function of r, and we have to 
write our "0-th approximation" as 

— p 2 /2mkT 
f o= f r(r)-ip(p); h--J^^fT'> //.ArdVr = l 

(14-6) 

The distribution function f for the diffusing molecules will change 
with time, but we will find that the rate of diffusion is slow enough so 
that the term ∂f/∂t in Eq. (14-1) is negligible compared to other
terms. 

In the case of diffusion there is no force F, but f does depend on 
r, so the approximate solution of Eq. (14-1) is 

    f ≈ f₀ − t_c(p/m)·grad_r f₀ = [f_r − (t_c/m)p·grad_r f_r] f_p    (14-7)

where the vector grad_r f_r points in the direction of increasing density of the tagged molecules. The anisotropy is again in the momentum distribution, but here the preponderance is opposite to grad_r f_r; there is a tendency of the tagged molecules to flow away from the region of highest density. The conditional probability density that a molecule, if it is at point r, has a momentum p, is [see Eq. (11-2)]

    (f/f_r) ≈ [1 − (t_c/m)p·g] f_p;   g = (1/f_r) grad_r f_r

From this we can compute the mean drift velocity of the tagged mole- 
cules which are at point r (for convenience we point the x axis in the 
direction of g): 



    U = ∫∫∫(p/m)[1 − (t_c/m)p_x g] f_p dV_p = −(2t_c/m) g ∫∫∫(p_x²/2m) f_p dV_p

      = −(t_c kT/m) g ≈ −λ(2kT/πm)^{1/2} g = −λ(2kT/πm)^{1/2}(1/f_r) grad_r f_r

We see that in this case the drift velocity increases as T increases. 

The density ρ_i of tagged molecules at r is N_i f_r molecules per unit volume, so the flux J of tagged particles at r, the net diffusive flow caused by the uneven distribution of these particles, is

    J = N_i f_r U ≈ −D grad_r ρ_i;   D = t_c kT/m ≈ λ(2kT/πm)^{1/2}    (14-8)

where constant D is called the diffusion constant of the tagged mole- 
cules. A density gradient of tagged molecules produces a net flow 
away from the regions of high density, the magnitude of the flow being 
proportional to the diffusion constant D. We note that there is a sim- 
ple relationship between D and the mobility M of the same molecule 
when ionized and in an electric field, as given in Eq. (14-4), 

D = (kT/e)M (14-9) 

which is more accurate than our approximation for t c . 

Thus a measurement of diffusion in a gas enables us to predict the 
electrical conductivity of the gas or vice versa. Equation (14-8) is the 
basic equation governing diffusion. By adding to it the equation of con- 
tinuity, we obtain 

    ∂ρ_i/∂t = −div J ≈ D∇²ρ_i    (14-10)

which is called the diffusion equation. 
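A minimal numerical illustration of Eq. (14-10): the explicit finite-difference scheme below (a Python sketch with arbitrary values of D and of the grid; the stability requirement DΔt/Δx² < 1/2 for this scheme is respected) starts all the tagged molecules at the center and lets the density spread, reproducing ⟨x²⟩ = 2Dt.

    import numpy as np

    D, dx, dt = 1.0e-5, 1.0e-3, 2.0e-2       # D*dt/dx^2 = 0.2, below the 0.5 limit
    rho = np.zeros(201)
    rho[100] = 1.0/dx                        # all tagged molecules start at the center

    for _ in range(500):                     # advance to t = 500*dt = 10
        lap = (np.roll(rho, 1) - 2.0*rho + np.roll(rho, -1))/dx**2
        rho += D*dt*lap                      # d(rho)/dt = D * d2(rho)/dx2

    x = (np.arange(201) - 100)*dx
    print((rho * x**2 * dx).sum())           # ~2*D*t = 2e-4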

There are a number of other transport problems, heat flow and 
viscosity, for example, which can be worked out by use of the Boltz- 
mann equation. These will be given as problems. 




Fluctuations 



Any system in thermal equilibrium with its surroundings undergoes 
fluctuations in position and velocity because of the thermal motion of 
its own molecules as well as of any molecules that may surround it. 
Kinetic theory, which enables us to compute the mean thermal kinetic 
and potential energy inhering in each degree of freedom of the system, 
makes it possible to compute the variance (i.e., the mean-square am- 
plitude of the fluctuations) of each coordinate and momentum of the 
system. It is often useful to know the size of these variances, for they 
tell us the lower bound to the accuracy of a piece of measuring equip- 
ment and they sometimes give us a chance to measure, indirectly, the 
magnitude of some atomic constants, such as Avogadro's number N₀.

Equipartition of Energy 

Referring to Eqs. (13-9) and (13-10), we see that if the Hamiltonian function for a system can be separated into a sum of terms [(1/2m_i)(p_i/h_i)² + φ_i(q_i)], each of which is a function of just one pair of variables, p_i and q_i, then the Maxwell-Boltzmann probability density can be separated into a product of factors, (1/Z_i) exp[−(1/2m_i kT)(p_i/h_i)² − (1/kT)φ_i(q_i)], each of which gives the distribution in momentum and position of one separate degree of freedom. Even if the potential energy, or the scale factors h, cannot be completely separated for all the degrees of freedom of the system, if the potential energy does not depend on some coordinate q_j (such as the x coordinate of the center of mass of a dust particle floating freely in the air), then all values of that coordinate are equally likely (the dust particle can be anywhere in the gas) and its momentum will be distributed according to the probability density

    f_pj(p_j) = (1/Z_j) exp[−(p_j/h_j)²/2m_j kT];   Z_j = (2πm_j kT)^{1/2}    (15-1)







The mean thermal kinetic energy of the j-th degree of freedom is 
thus 

    ⟨K.E.⟩ = ∫_{−∞}^{∞} (p_j²/2m_j h_j²) f_pj (dp_j/h_j) = (1/2)kT    (15-2)

whether the coordinate is an angle or a distance or some other kind of curvilinear coordinate. Therefore the kinetic energy of thermal motion is equally apportioned, on the average, over all separable degrees of freedom of the system, an energy (1/2)kT going to each. If the potential energy is independent of q_j, then (1/2)kT is the total mean energy possessed by the j-th degree of freedom. On the average the energy of rotation of a diatomic molecule (described in terms of two angles) would be kT and the average energy of translation of its center of mass would be (3/2)kT. A light atom (helium for example) will have a higher mean speed than does a heavy atom (xenon for example) at the same temperature, in order that the mean kinetic energy of the
two be equal. In fact the mean-square value of the j-th velocity, when q_j is a rectangular coordinate (i.e., when h_j = 1), is

    ⟨q̇_j²⟩ = ⟨p_j²/m_j²⟩ = kT/m_j    (15-3)

Mean-Square Velocity

For example, the x component of the velocity of a dust particle in the air fluctuates irregularly, as the air molecules knock it about. The average value of ẋ is zero (if the air has no gross motion) but the mean-square value of ẋ is just kT divided by the mass of the particle. We note that this mean-square value is independent of the pressure or density of the air the particle is floating in, and is thus independent of the number of molecules which hit it per second. If the gas is rarefied only a few molecules hit it per second and the value of ẋ changes only a few times a second; if the gas is dense the collisions occur more often and the velocity changes more frequently per second (as shown in Fig. 15-1), but the mean-square value of the velocity is the same in both cases if the temperature is the same.

Even if the potential energy does depend on the coordinate q_j, the mean kinetic energy of the j-th degree of freedom is still (1/2)kT. If the potential energy can be separated into a term φ_j(q_j) and another term which is independent of q_j, then the probability density that the j-th coordinate has a value q_j is

    f_qj(q_j) = (1/Z_qj) e^{−φ_j/kT};   Z_qj = ∫ e^{−φ_j/kT} h_j dq_j    (15-4)







Fig. 15-1. Variation with time of x component of velocity 
and displacement of Brownian motion. Lower 
curves for mean time between collisions five 
times that for upper curves. 



where the integration is over the allowed range of q_j. The mean value of the potential energy turns out to be a function of kT, but the nature of the function depends on how φ_j varies with q_j.

The usual case is the one where the scale constant is unity and where the potential energy φ_j = (1/2)m_j ω_j² q_j² has a quadratic dependence on q_j, so that in the absence of thermal motion the displacement q_j executes simple harmonic motion with frequency ω_j/2π. In this case Z_qj = (2πkT/m_j ω_j²)^{1/2}, and the mean value of potential energy when thermal fluctuations are present is



    ⟨φ_j⟩ = (1/2)m_j ω_j² ∫ q_j² f_qj dq_j = (1/2)kT    (15-5)



Thus when the potential energy per degree of freedom is a quadratic function of each q, the mean potential energy per q is (1/2)kT, independent of ω_j, and equal to the mean kinetic energy, so that the total mean energy for this degree of freedom is kT. Also the mean-square displacement of the coordinate from its equilibrium position is

    ⟨q_j²⟩ = kT/m_j ω_j²
for coordinates with quadratic potentials. 

As an example of a case where φ_j is not quadratic, we recall the case of the magnetic dipoles, where φ_j = −μℬ cos θ, so that Z_qj = (2kT/μℬ) sinh(μℬ/kT). From Eq. (13-16) the mean potential energy is

    ⟨−μℬ cos θ⟩ = kT − μℬ coth(μℬ/kT) ≈ −μ²ℬ²/3kT   (kT ≫ μℬ);    ≈ kT − μℬ   (kT ≪ μℬ)

which is not equal to (1/2)kT.

Fluctuations of Simple Systems 

A mass M, on the end of a spring of stiffness constant K, is constantly undergoing small, forced oscillations because of thermal fluctuations of the pressure of the gas surrounding it and also because of thermal fluctuations of the spring itself. In the absence of these fluctuations the mass would describe simple harmonic motion of amplitude A and frequency (1/2π)(K/M)^{1/2}, so its displacement and its mean kinetic and potential energy would be

    x = A cos[t(K/M)^{1/2} + α];   ⟨K.E.⟩ = (1/4)KA² = ⟨P.E.⟩

where A and α are determined by the way the mass is started into motion. In the presence of the thermal fluctuations an irregular motion is superposed on this steady-state oscillation. Even if there is no steady-state oscillation the mass will never be completely at rest but will exhibit a residual motion having total energy, potential plus kinetic, of kT, having a mean-square amplitude A_T² such that (1/2)K A_T² = kT, or

    A_T² = 2kT/K

With a mass of a few grams and a natural frequency of a few cycles per second (K ≈ 1 in mks units), this mean-square amplitude is very small, of the order of 10⁻²⁰ m², a root-mean-square amplitude of about 10⁻⁸ cm. This is usually negligible, but in some cases it is of practical importance. The human eardrum, plus the bony structure coupling it to the inner ear, acts like a mass-spring system. Even when there is no noise present the system fluctuates with thermal motion having a mean amplitude of about 10⁻⁸ cm. Sounds so faint that they drive the eardrum with less amplitude than this are "drowned out" by the thermal noise. In actual fact this thermal-noise motion of the eardrum sets the lower limit of audibility of sounds in the frequency range of greatest sensitivity of






the ear (1000 to 3000 cps); if the incoming noise level is less than this "threshold of audibility," we "hear" the thermal fluctuations of our eardrums rather than the outside noises.

We notice that the root-mean-square amplitude of thermal motion of a mass on a spring, (2kT/K)^{1/2}, is independent of the density of the ambient air and thus independent of the number of molecular blows impinging on the mass per second. If the density is high the motion will be quite irregular because of the large number of blows per second; if the density is low the motion will be "smoother," but the mean-square amplitude will be the same if the temperature is the same, as illustrated in Fig. 15-2. Even if the mass-spring system is in a vacuum the motion will still be present, caused by the thermal fluctuations of the atoms in the spring.





Fig. 15-2. Brownian motion of a simple oscillator for two 
different mean times between collisions. 



The same effect is present in more-complex systems: each degree of freedom has mean kinetic energy (1/2)kT, and similarly for the potential energy if it depends quadratically on the displacement, as in a simple oscillator. A string of mass ρ per unit length and length L under tension Q can oscillate in any one of its standing waves; the displacement from equilibrium and the total energy of vibration of the n-th wave are



    y_n = A_n sin(πnx/L) cos[(πnt/L)(Q/ρ)^{1/2} + α_n]

    ⟨K.E.⟩ + ⟨P.E.⟩ = (1/2)Q A_n²(πn/L)² ∫₀^L sin²(πnx/L) dx = (1/4)(π²n²Q/L)A_n²

When the string is at rest except for its thermal vibrations, each of the standing waves has a mean energy of kT, so the mean-square amplitude of motion ⟨A_n²⟩ of the n-th wave is 4LkT/π²n²Q and the mean-square amplitude of deflection of some central portion of the string is the sum over all the standing waves (because of the incoherence of the motion, we sum the squares),

    ⟨y²⟩ ≈ (1/4) Σ_{n=1}^{3N} ⟨A_n²⟩ = (LkT/Q) Σ_{n=1}^{3N} (1/π²n²) ≈ LkT/6Q

which is related to the result for the simple oscillator, 12Q/L being equivalent to the stiffness constant K of the simple spring. If the string is part of a string galvanometer these thermal fluctuations will mask the detection of any current that deflects the string by an amount less than (LkT/6Q)^{1/2}.
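The mode sum converges quickly, since Σ 1/π²n² = 1/6. A short Python check (with arbitrary illustrative values of L, Q, and T):

    import numpy as np

    k, T = 1.38e-23, 300.0
    L, Q = 0.05, 1.0e-2                 # string length (m) and tension (N); illustrative
    n = np.arange(1, 3001)              # the first 3000 standing waves

    y2 = (L*k*T/Q) * (1.0/(np.pi**2 * n**2)).sum()
    print(y2, L*k*T/(6.0*Q))            # the partial sum is already close to LkT/6Q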

Density Fluctuations in a Gas 

The thermal motion of the constituent molecules produces fluctua- 
tions of density, and thus of pressure, in a gas. We could analyze the 
fluctuation in terms of pressure waves in the gas, as we analyzed the 
motion of a string under tension in terms of standing waves. Instead 
of this, however, we shall work out the problem in terms of the poten- 
tial energy of compression of the gas. Suppose we consider that part 
of the gas which, in equilibrium, would occupy a volume V s and would 
contain N s molecules. At temperature T the gas, at equilibrium, 
would have a pressure P = N_s kT/V_s throughout. If the portion of the gas originally occupying volume V_s were compressed into a somewhat smaller volume V_s − ΔV (ΔV ≪ V_s), an additional pressure ΔP = [N_s kT/(V_s − ΔV)] − P ≈ P(ΔV/V_s) would be needed and an amount of work

    ∫₀^{ΔV} ΔP d(ΔV) = (1/2)(P/V_s)(ΔV)² = (1/2)N_s kT(ΔV/V_s)²

would be required to produce this compression. When thus compressed, 
this portion of the gas would have a density greater than the equilibrium 



density ρ₀ by an amount Δρ = ρ₀(ΔV/V_s) and its pressure would be greater than P by an amount ΔP ≈ P(ΔV/V_s).

Therefore the potential energy corresponding to an increase of density of the part of the gas originally in volume V_s, from its equilibrium density ρ₀ to a nonequilibrium density ρ₀ + Δρ, is (1/2)N_s kT(Δρ/ρ₀)². For thermal fluctuations the mean potential energy is (1/2)kT, if the potential energy is a quadratic function of the variable Δρ (as it is here). Consequently the mean-square fractional fluctuation of density of a portion of the gas containing N_s molecules (and occupying volume V_s at equilibrium) is

    ⟨(Δρ/ρ₀)²⟩ = 1/N_s    (15-6)

which is also equal to the mean-square fractional fluctuation of pressure, ⟨(ΔP/P)²⟩. Another derivation of this formula is given in Chapter 23.

We see that the smaller the fraction of the gas we look at (the smaller N_s is) the greater the fractional fluctuation of density and pressure caused by thermal motion. If we watch a small group of molecules, their thermal motion will produce relatively large changes in their density. On the other hand, if we include a large number of molecules in our sample, the large fluctuations in each small part of the sample will to a great extent cancel out, leaving a mean-square fractional fluctuation of the whole which is smaller the larger the number N_s of molecules in the sample. The root-mean-square fractional fluctuation of density or pressure of a portion of the gas is inversely proportional to the square root of the number of molecules sampled.
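Equation (15-6) lends itself to a direct counting experiment. The Python sketch below (illustrative values throughout) repeatedly counts how many of N molecules, placed at random, fall inside a subvolume that holds N_s of them on the average; the fractional variance of the count comes out close to 1/N_s.

    import numpy as np

    rng = np.random.default_rng(2)
    N, frac, trials = 100_000, 0.01, 2000        # subvolume is 1% of the box: N_s = 1000

    counts = rng.binomial(N, frac, size=trials)  # molecules found in the subvolume
    ns = counts.mean()
    print(counts.var()/ns**2, 1.0/ns)            # <(d rho/rho)^2> vs 1/N_s; this sampling
                                                 # gives exactly (1 - frac)/N_s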

These fluctuations of density tend to scatter acoustical and electro- 
magnetic waves as they travel through the gas. Indeed it is the scat- 
tering of light by the thermal fluctuations of the atmosphere which pro- 
duces the blue of the sky. The fluctuations are independent of temper- 
ature, although at lower temperatures the N s molecules occupy a 
smaller volume and thus the fluctuations are more "fine-grained" at 
lower temperatures. 

Incidentally, we could attack the problem of density fluctuations by 
asking how many molecules happen to be in a given volume V s at some 
instant, instead of asking what volume N s molecules happen to occupy 
at a given instant, as we did here. The results will of course turn out 
the same, as will be shown in Chapter 23. 

Brownian Motion 

The fluctuating motion of a small particle in a fluid, caused by the 
thermal fluctuations of pressure on the particle, is called Brownian 
motion. We have already seen [in Eq. (15-3)] that the mean square of 




each of the velocity components of such motion is proportional to T 
and inversely proportional to the mass of the particle. The mean 
square of each of the position coordinates of the particle is not as 
simple to work out for the unbound particle as it was for the displace- 
ment of the mass on a spring, discussed in the preceding section. 

In the case of the mass on the end of the spring, the displacement 
x from equilibrium is confined by the restoring force, the maximum 
displacement is determined by the energy possessed by the oscillator, 
and we can measure a mean-square displacement from equilibrium, 
<x 2 >, by averaging the value of x 2 over any relatively long interval 
of time (the longer the interval, the more accurate the result, of 
course). But the x component of displacement of a free particle in a 
fluid is not so limited; the only forces acting on the particle (if we 
can neglect the force of gravity) are the fluctuations of pressure of 
the fluid, causing the Brownian motion, and the viscous force of the 
fluid, which tends to retard the particle's motion. If we measure the 
x component of the particle's position (setting the initial position at 
the origin) we shall find that, although the direction of motion often 
reverses, the particle tends to drift away from the origin as time goes 
on, and eventually it will traverse the whole volume of fluid, just as 
any molecule of the fluid does. If we allow enough time, the particle 
is eventually likely to be anywhere in the volume. This is in correspondence with the Maxwell-Boltzmann distribution; the potential energy does not depend on x (neglecting gravity) so the probability density f is independent of x; any value of x is equally likely in the end.

But this was not the problem at present. We assumed that the par-
ticle under observation started at x = 0; it certainly isn't likely to 
drift far from the origin right away. Of course the average value of x 
is zero, since the particle is as likely to drift to the left as to the 
right. But the expected value of x 2 must increase somehow with time. 
At t = the particle is certainly at the origin; as time goes on the 
particle may drift farther and farther away from x = 0, in either di- 
rection. We wish to compute the expected value of x 2 as a function of 
time or, better still, to find the probability density of finding the par- 
ticle at x after time t. Note that this probability is a conditional 
probability density; it is the probability of finding the x component of 
the position of the particle to be x at time t if the particle was at the 
origin at t = 0. 

Random Walk 

We can obtain a better insight into this problem if we consider the 
random-walk problem of Eqs. (11-9) and (11-10). A crude model of 
one-dimensional Brownian motion can be constructed as follows. Sup- 
pose a particle moves along a line with a constant speed v. At the end 
of each successive time interval τ it may change its direction of mo-






tion or not, the two alternatives being equally likely (p = 1/2) and distributed at random. After N intervals (i.e., after time Nτ) the chance that the particle is displaced an amount x_n = (2n − N)vτ from its initial position is then the chance that during n of the N intervals it went to the right and during the other (N − n) intervals it went to the left; it covered a distance vτ in each interval. According to Eq. (11-5) this probability is



    P_n(N) = N!(1/2)^N / n!(N − n)!    (15-7)



since p = 1/2 for this case. 

The expected value of x_n and its variance are then obtained from Eqs. (11-9) and (11-10) (for p = 1/2):

    ⟨x⟩ = Σ_{n=0}^{N} (2n − N)vτ P_n(N) = 0

    ⟨x²⟩ = Σ_{n=0}^{N} (2n − N)²(vτ)² P_n(N) = N(vτ)²    (15-8)



The expected value of the displacement is zero, since the particle is as likely to go in one direction as in the other. Its tendency to stray from the origin is measured by ⟨x²⟩, which increases linearly with the number of time intervals N, and thus increases linearly with time. If the particle, once started, continued always in the same direction (p = 0 or 1) the value of ⟨x²⟩ would be (Nvτ)², increasing quadratically with time. But with the irregular, to-and-fro motion of the random walk, ⟨x²⟩ increases only linearly with time. Figure 15-3 is a plot




Fig. 15-3. Displacement for random walk. At each dot the 
"walker" flipped a coin to decide whether to 
step forward or backward. 




of x as a function of t for a random walk as described here. We note 
the irregular character of the motion and the tendency to drift away 
from x = 0. Compare it with the curves of Fig. 15-1, and also with 
15-2, for a mass with restoring force. 
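Plots like Fig. 15-3 are easy to generate, and averaging many walks verifies Eq. (15-8). A minimal Python sketch (the parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    N, walks, v, tau = 1000, 5000, 1.0, 1.0

    steps = rng.choice([-1.0, 1.0], size=(walks, N)) * v * tau
    x = steps.sum(axis=1)                     # displacement after N intervals
    print(x.mean(), x.var(), N*(v*tau)**2)    # mean ~ 0, variance ~ N*(v*tau)^2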

The limiting case of N very large and τ very small is the case most nearly corresponding to Brownian motion. This limiting form was calculated at the end of Chapter 11. There we found that the probability distribution for displacement of the particle after N steps reduced, in the limit of N large, to the normal distribution of Eq. (11-17). The variance, in this case, as we just saw, is σ² = (vτ)²N (for p = 1/2) and the mean value of x is 0. Since the time required for the N steps is t = τN we can write the conditional probability density (that the particle will be at x at time t if it was at x = 0 at t = 0) as

    F(x) = [1/(4πDt)^{1/2}] e^{−x²/4Dt};   D = (1/2)v²τ    (15-9)

so that the probability that the particle is between x and x + dx at 
time t is F(x) dx. We see that the "spread" of the distribution in- 
creases as t increases, as the particle drifts away from its initial 
position. The mean- square value of x is 

    ⟨x²⟩ = σ² = (vτ)²N = 2Dt

which increases linearly with time. Thus the question of the depend- 
ence of <x 2 > on time, raised earlier in this section, is answered to 
the effect that <x 2 > is proportional to t. The value of the propor- 
tionality constant 2D for the actual Brownian motion of a particle in 
a fluid must now be determined. 

The Langevin Equation 

To determine the value of constant D for a particle in a fluid we 
must study its equation of motion. As before, we study it in a single 
dimension first. The x component of the force on the particle can be 
separated into two parts, an average effect of the surrounding fluid 
plus a fluctuating part, caused by the pressure fluctuations of thermal 
motion of the fluid. The average effect of the fluid on the particle is a 
frictional force, caused by the fluid's viscosity. If the velocity of the particle in the x direction is ẋ, this average frictional force has an x component equal to Mβẋ, opposing the particle's motion, where the mechanical resistance to motion, Mβ, in a fluid of viscosity η, on a spherical particle of radius a is Mβ = 6πaη (Stokes' law). The fluc-
tuating component of the force on the particle can simply be written 
as MA(t) (we write these functions with a factor M, the mass of the 
particle, so that M can be divided out in the resulting equation). 

The equation of motion for the x component of the particle's posi- 
tion can thus be written as 




    Mẍ = −Mβẋ + MA(t)    (15-10)

which is known as Langevin's equation. We note that β has the dimensions of reciprocal time. Multiplying the equation by x/M and using the identities



    xẋ = (1/2)(d/dt)(x²)   and   xẍ = (1/2)(d²/dt²)(x²) − (ẋ)²



we have 



    (1/2)(d²/dt²)(x²) = −(1/2)β(d/dt)(x²) + (ẋ)² + xA(t)



This is an equation for one particular particle. If we had many identical particles in the fluid (or if we performed a sequence of similar observations on one particle) each particle would have different values of x and ẋ at the end of a given time t, because of the effects of the random force A(t).

Suppose we average the effects of the fluctuations by averaging the terms of Eq. (15-10) over all similar particles. The term xA(t) will average out because both ⟨x⟩ and ⟨A⟩ are zero and the fluctuations of x and A are independent; the average value of ẋ², however, carries with it the mean effects of the fluctuating force A(t). We showed in Eq. (15-3) that for a particle in thermal equilibrium at temperature T, its mean-square velocity component ⟨ẋ²⟩ is equal to kT/M.

If we now symbolize the mean-square displacement ⟨x²⟩ as s(t), the average of the equation of motion written above turns out to be

    (1/2)s̈ = (kT/M) − (1/2)βṡ

The solution of this equation is ṡ = (2kT/Mβ) − Ce^{−βt}. The transient exponential soon drops out, leaving for the steady-state solution ṡ = 2kT/Mβ and thus

    s = ⟨x²⟩ = (2kT/Mβ)t    (15-11)

This result answers the question raised at the end of the previous section; the constant D = (1/2)v²τ used there now turns out to be kT/Mβ and, for a spherical particle of radius a in a fluid of viscosity η, constant D is equal to kT/6πaη from Stokes' law.
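Equation (15-11) can be checked by integrating the Langevin equation (15-10) step by step. In the Python sketch below (illustrative only) the random force A(t) is modeled as Gaussian noise of strength 2βkT/M per unit time, which is the strength needed to maintain the equilibrium value ⟨ẋ²⟩ = kT/M; all parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)
    k, T = 1.38e-23, 300.0
    M, beta = 1.0e-15, 1.0e3                      # particle mass (kg) and beta (1/s)
    dt, nstep, npart = 1.0e-5, 20_000, 2000

    x = np.zeros(npart)
    v = rng.normal(0.0, np.sqrt(k*T/M), npart)    # start with thermal velocities

    for _ in range(nstep):
        kick = rng.normal(0.0, np.sqrt(2.0*beta*k*T/M*dt), npart)
        v += -beta*v*dt + kick                    # Langevin equation (15-10)
        x += v*dt

    t = nstep*dt                                  # beta*t = 200 >> 1 here
    print(x.var(), (2.0*k*T/(M*beta))*t)          # the two agree, Eq. (15-11)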

The innocuous-looking result shown in Eq. (15-11) enabled Perrin and others first to measure Avogadro's number N₀ and thus, in a sense, first to make contact with atomic dimensions. They were able to measure N₀ in terms of purely macroscopic constants, plus obser-



vations of Brownian motion. A spherical particle was used, of known radius a, so that Stokes' law applied. The viscosity of the fluid in which the particle was immersed was measured, as well as the temperature T of the fluid. The value of the gas constant R was known, but at the time neither the value of the Boltzmann constant k nor the value of N₀ = R/k was known. The x coordinates of the particle in the fluid were measured at the ends of successive intervals of time of length t: x₀ at t = 0, x₁ at t, x₂ at 2t, and so on, and the average of the set of values (x_{n+1} − x_n)² was computed, which is equivalent to the ⟨x²⟩ of Eq. (15-11).

By making the measurements for several different values of the time interval t, it was verified that ⟨x²⟩ does indeed equal 2Dt, and the value of D was determined. The value of Avogadro's number,

    N₀ = R/k = RT/6πaηD

can thus be computed. By this method a value of N₀ was obtained which checks within about 5 parts in a thousand with values later obtained by more direct methods. Of course very small spheres had to be used, to make ⟨x²⟩ as large as possible, and careful observations with a microscope were made to determine the successive x_n's. Perrin used spheres of radius 2 × 10⁻⁵ cm and time intervals from a few seconds to a minute or more.
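The arithmetic of such a measurement is simple enough to show in a few lines of Python. The observed ⟨x²⟩ below is an invented illustrative number, chosen to be consistent with D = kT/6πaη for a sphere of Perrin's size in water at room temperature; everything else is a macroscopic constant.

    R   = 8.31        # gas constant, J/mole-K
    T   = 300.0       # K
    a   = 2.0e-7      # sphere radius, m (Perrin's 2e-5 cm)
    eta = 1.0e-3      # viscosity of water, kg/m-s
    t   = 30.0        # time between successive observations, s

    x2 = 6.6e-11      # measured <x^2>, m^2 (illustrative value)
    D  = x2/(2.0*t)                      # from <x^2> = 2Dt
    print(R*T/(6.0*3.14159*a*eta*D))     # N0 = RT/(6 pi a eta D), ~6e23 per mole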

The Fokker-Planck Equations

Brownian motion is simply the fine details of the process of diffusion. If there were initially a concentration of particles in one region of the fluid, as time went on these particles would diffuse, by Brownian motion, to all parts of the fluid. The diffusion constant for a particle in a fluid is D = kT/Mβ, which is to be compared with Eq. (14-8) for the D for a molecule; in the molecular case β is evidently equal to 1/t_c, whereas for a larger, spherical particle β = 6πaη/M. The mean concentration of the diffusing particles must satisfy a diffusion equation of the type given in Eq. (14-10).

This means that there is a close connection between the results of 
this chapter and those of the section on diffusion. Whether the diffusing 
entity undergoing Brownian motion is a molecule of the gas or a dust particle in the gas, the probability density of its presence at the point in space given by the vector r at time t, if it starts at r₀ at t = 0, is given by the three-dimensional generalization of Eq. (15-9),

    f_r(r,t) = (4πDt)^{−3/2} exp(−|r − r₀|²/4Dt);   D = kT/Mβ    (15-12)

which is a solution of the diffusion equation (14-10). The value of β






appropriate for the particle under study must be used, of course. 

The distribution function for diffusion by Brownian motion of Eq. 
(15-12) and the diffusion equation (14-10) that it satisfies can thus be 
derived by the methods of the previous chapter or else by those of 
this chapter. For example, it is possible to generalize the Langevin 
equation (15-10) and manipulate it to obtain the diffusion equation. 
Also, by either method, it can be shown that when an external force 
F acts on the diffusing particle (such as the force of gravity), the dif- 
fusion equation has the more general form 

    ∂f_r/∂t = div[D grad f_r − (F/Mβ)f_r]    (15-13)

When f_r is the density of diffusing substance (molecules or heat, for example), Eq. (15-13), or its simple version (14-10) for F = 0, is called the diffusion equation. When f_r is the distribution function for a particle undergoing Brownian motion and the equation is considered to be a first approximation to a generalized Langevin equation, then Eq. (15-13) is called a Fokker-Planck equation. The solutions behave the same in either case, of course. The solution of (15-13) for F = 0 and for the particle starting at r = r₀ when t = 0 is Eq. (15-12). From it can be derived all the characteristics of Brownian motion in regard to the possible position of the particle at time t.

A Fokker-Planck equation can also be obtained for the distribution in momentum, f_p(p,t), of the particle. It is

    ∂f_p/∂t = β div_p(MkT grad_p f_p + f_p p)    (15-14)

For a particle that is started at t = 0 with a momentum p = p₀, the solution of this equation, which is the probability density of the particle in momentum space, is

    f_p(p,t) = [2πMkT(1 − e^{−2βt})]^{−3/2} exp[−|p − p₀e^{−βt}|²/2MkT(1 − e^{−2βt})]    (15-15)



This interesting solution shows that the expected momentum of the particle at time t is p₀e^{−βt} [compare with the discussion of Eq. (11-17)], which is the momentum of a particle started with a momentum p₀ and subjected to a frictional retarding force −βp. As time goes on, the effect of the fluctuations "spreads out" the distribution in momentum; the variance of each component of the momentum (i.e., its mean-square deviation from its expected value) is MkT(1 − e^{−2βt}), starting as zero when t = 0, when we are certain that the particle's momentum is p₀, and approaching asymptotically the value MkT, which is typical of the Maxwell distribution. Thus Eq. (15-15) shows how an originally nonequilibrium distribution for a particle (or a molecule) in a fluid can change with time into the Maxwell distribution typical of an equilibrium state. Constant β, which equals 6πaη/M for a spherical particle or 1/t_c for a molecule in a gas, is thus equal to the reciprocal of the relaxation time for the distribution, which relates directly to the discussion of Eq. (14-2).

Of course the most general distribution function would be f(r,p,t), 
giving the particle's distribution in both position and momentum at 
time t after initial observation. The equation for this f is, not sur- 
prisingly, closely related to the Boltzmann equation (13-2). It can be 
shown to be 

    ∂f/∂t + (p/M)·grad_r f + F·grad_p f = β div_p[MkT grad_p f + pf]    (15-16)

The derivation of this equation, particularly of the right-hand side of 
it, involves a generalization of the arguments used in deriving Eq. 
(15-11). This right-hand side is another approximation to the collision 
term Q of Eq. (13-2). 



III

STATISTICAL 
MECHANICS 




Ensembles and Distribution Functions



It is now time to introduce the final generalization, to present a 
theoretical model that includes all the special cases we have been 
considering heretofore. If we had been expounding our subject with 
mathematical logic we would have started at this point, presenting 
first the most general assumptions and definitions as postulates, 
working out the special cases as theorems following from the 
postulates, and only at the end demonstrating that the predictions, 
implicit in the theorems, actually do correspond to the "real 
world," as measured by experiment. We have not followed this 
procedure, for several reasons. 

In the first place, most people find it easier to understand a new 
subject, particularly one as complex as statistical physics, by pro- 
gressing from the particular to the general, from the familiar to the 
abstract. 

A more important reason, however, is that physics itself has 
developed in a nonlogical way. Experiments first provide us with 
data on many particular cases, at first logically unconnected with 
each other, which have to be learned as a set of disparate facts. 
Then it is found that a group of these facts can be considered 
to be special cases of a "theory," an assumed relationship be-
tween defined quantities (energy, entropy, and the like) which 
will reproduce the experimental facts when the theory is appropri- 
ately specialized. This theory suggests more experiments, which 
may force modifications of the theory and may suggest further 
generalizations until, finally, someone shows that the whole sub- 
ject can be "understood" as the logical consequences of a few
basic postulates. 

At this point the subject comes to resemble a branch of mathe- 
matics, with its postulates and its theorems logically deduced there- 
from. But the similarity is superficial, for in physics the experimental 
facts are basic and the theoretical structure is erected to make it 





easier to "understand" the facts and to suggest ways of obtaining new 
facts. A logically connected theory turns out to be more convenient to 
remember than are vast arrays of unconnected data. This convenience, 
however, should not persuade us to accord the theory more validity 
than should inhere in a mnemonic device. We must not expect, for ex- 
ample, that the postulates and definitions should somehow be "the 
most reasonable" or "the logically inevitable" ones. They have been 
chosen for the simple, utilitarian reason that a logical structure 
reared on them can be made to correspond to the experimental facts. 
Thus the presentation of a branch of physics in "logical" form tends 
to exaggerate the importance and inevitability of the theoretical as- 
sumptions, and to make us forget that the experimental data are the 
only truly stable parts of the whole. 

This danger, of ascribing a false sense of inevitability to the the- 
ory, is somewhat greater with statistical physics than with other 
branches of classical physics, because the connection between experi- 
ment and basic theory is here more indirect than usual. In classical 
mechanics the experimental verification of Newton's laws can be 
fairly direct; and the relationship between Faraday's and Ampere's 
experiments and Maxwell's equations of electromagnetic theory is 
clearcut. In thermodynamics, the experiments of Rumford, relating 
work and heat, bear a direct relationship to the first law, but the ex- 
perimental verification of the second law is indirect and negative. 
Furthermore, the more accurate "proofs" that the Maxwell-Boltz- 
mann distribution is valid for molecules in a gas, are experimentally 
circuitous. And finally, as we shall see later, there is no experiment, 
analogous to those of Faraday or Ampere, which directly verifies any 
of the basic assumptions of statistical mechanics; their validity must 
be proved piecemeal and inferentially. In the end, of course, the
proofs are convincing from their very number and breadth of applica- 
tion. 

However, we have now reached a point in our exposition where the
basic theory must be presented, and it is necessary to follow the pat- 
tern of mathematical logic for a time. Our definitions and postulates 
are bound to sound arbitrary until we have completed the demonstra- 
tion that the theory does indeed correspond to a wide variety of ob- 
served facts. But we must keep in mind that they have been chosen 
solely to obtain this correspondence with observation, not because 
they "sound reasonable" or satisfy some philosophical "first prin- 
ciples." 

Distribution Functions in Phase Space 

In Chapters 13 and 15 we discussed distribution functions for mole- 
cules, and also for multimolecular particles in a gas. In statistical 




mechanics we carry this generalization to its logical conclusion, and 
deal with distribution functions for complete thermodynamic systems. 
A particular microstate of such a system (a gas of N particles, for 
example) can be specified by choosing values for the 6N position and 
momentum coordinates; the distribution function is the probability 
density that the system has these coordinate values. Geometrically 
speaking, an elementary region in this 6N-dimensional phase space 
represents a microstate of the system; the point representing the sys- 
tem passes through all the microstates allowed by its thermodynamic 
state; the fraction of time spent by the system point in a particular 
region of phase space is proportional to the distribution function cor- 
responding to the thermodynamic state. In other words, a choice of a 
particular thermodynamic state (a macrostate) is equivalent to a 
choice of a particular distribution function, and vice versa. The task 
of statistical mechanics is to devise methods for finding distribution 
functions which correspond to specific macrostates. 

According to classical mechanics, the distribution function for a system with φ = 3N degrees of freedom is

    f(q,p) = f(q₁, q₂, ..., q_φ, p₁, p₂, ..., p_φ)

where the q's are the coordinates and the p's the conjugate momenta [see Eq. (13-9)] which specify the configuration of the system as a whole. Then f(q,p) dV_q dV_p [where dV_q = h₁dq₁ h₂dq₂ ⋯ h_φ dq_φ and dV_p = (dp₁/h₁)(dp₂/h₂) ⋯ (dp_φ/h_φ)] is the probability that the system point is within the element of phase space dV_q dV_p at position q₁, ..., p_φ, at any arbitrarily chosen instant of time.

More generally, the distribution function represents the probability 
density, not for one system, but for a collection of similar systems. 
Imagine a large number of identical systems, all in the same thermo- 
dynamic state but, of course, each of them in different possible micro- 
states. This collection of sample systems is called an ensemble of 
systems, the ensemble corresponding to the specified macrostate. 
Different ensembles, representing different thermodynamic states, 
have different populations of microstates. The distribution function 
for the macrostate measures the relative number of systems in the 
ensemble which are in a given microstate at any instant. Thus it is a 
generalization of the distribution function of Eq. (13-10), which was 
for an ensemble of molecules. 

Each system in the ensemble has φ coordinates q_i and momenta p_i, and a Hamiltonian function H(q,p), which is the total energy
of the system, expressed in terms of the q's and p's. The values of 
these q's and p's at a given instant determine the position of its 
system point in phase space. The motion of the system point in phase 
space is determined by the equations of motion of the system. These 




can be expressed in many forms, each of which is discussed in books on classical dynamics. The form which is most useful for us at present is the set of Hamilton's equations,

    q̇_i = ∂H/∂p_i;   ṗ_i = −∂H/∂q_i,   i = 1, 2, ..., φ    (16-1)

the first set relating the velocities q̇_i to the momenta and the second set relating the force components to the rates of change of the momenta (the "velocity components" in momentum space).

The ensemble of systems can thus be represented as a swarm of 
system points in phase space, each point moving in accordance with 
Eqs. (16-1); the velocity of the point in phase space is proportional 
to the gradient of H. The density of points in any region of phase 
space is proportional to the value of the distribution function f(q,p) 
in that region. 
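The motion of a single system point can be followed explicitly. The Python sketch below (illustrative; the constants are arbitrary) integrates Hamilton's equations (16-1) for a one-dimensional harmonic oscillator with a leapfrog scheme, a method that shares the volume-preserving character of the true flow, and shows that the system point stays on its constant-H surface.

    import numpy as np

    m, w = 1.0, 2.0              # mass and angular frequency (illustrative)
    dt, nstep = 1.0e-3, 10_000
    q, p = 1.0, 0.0              # initial point in the two-dimensional phase space

    def H(q, p):
        return p*p/(2.0*m) + 0.5*m*w*w*q*q

    E0 = H(q, p)
    for _ in range(nstep):
        p -= 0.5*dt*m*w*w*q      # pdot = -dH/dq (half step)
        q += dt*p/m              # qdot = +dH/dp (full step)
        p -= 0.5*dt*m*w*w*q      # pdot (second half step)
    print(H(q, p)/E0)            # stays ~1.000: the point remains on H = E0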

Liouville's Theorem 

If the thermodynamic state is not an equilibrium state, f will be a 
function of time. If the state is an equilibrium state, the density of 
system points in any specified region of phase space will be constant; 
as many system points will enter the region per unit of time as will 
leave it. The swarm of system points has some similarity to the 
swarm of particles in a gas. The differences are important, how- 
ever. The system points are moving in a 2φ-dimensional phase space, not real space; also the system points do not collide. In fact
the system points do not interact at all, for each system point repre- 
sents a different system of the ensemble, and the separate systems 
cannot interact since they are but samples in an imaginary array of 
systems, assembled to represent a particular macrostate. Each in- 
dividual system point, for example, may represent a whole gas of N 
atoms, or a crystal lattice, depending on the situation the ensemble 
has been chosen to represent. There can be no physical interaction 
between the individual sample systems. 

This means that the Boltzmann equation for the change of f with time, the generalization of Eq. (13-2) to the ensemble, has no collision term Q. The equation,

    ∂f/∂t + Σ_{i=1}^{φ} ∂(f q̇_i)/∂q_i + Σ_{i=1}^{φ} ∂(f ṗ_i)/∂p_i = 0    (16-2)

is simply the equation of continuity in phase space, and represents the fact that, as the swarm of system points moves about in phase space, no system point either appears or disappears.




Since each system in the ensemble obeys Hamilton's equations (16-1), this equation of continuity becomes

    ∂f/∂t + Σ_{i=1}^{φ} [(∂/∂q_i)(f ∂H/∂p_i) − (∂/∂p_i)(f ∂H/∂q_i)] = 0

and since

    (∂/∂q_i)(f ∂H/∂p_i) = (∂H/∂p_i)(∂f/∂q_i) + f(∂²H/∂q_i ∂p_i),   etc.,

the terms in the second derivatives of H cancel and we have

    df/dt = ∂f/∂t + Σ_{i=1}^{φ} q̇_i(∂f/∂q_i) + Σ_{i=1}^{φ} ṗ_i(∂f/∂p_i) = 0    (16-3)

where df/dt is the change in the distribution function f in a coordinate system which moves with the system points. Because of the relationship between q̇ and p, and between ṗ and q, inherent in Hamilton's equations, the density of system points near a given system point of the
ensemble remains constant as the swarm moves about. If, at t = 0, 
the swarm has a high density in a localized region of phase space, 
this concentration of system points moves about as time goes on but 
it does not disperse; it keeps its original high density. This result is 
known as Liouville's theorem. 
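
Liouville's theorem is easy to check numerically for a simple system. The following is a minimal sketch of my own, not from the text; it assumes a one-dimensional harmonic oscillator with m = ω = 1, whose Hamiltonian flow is an exact rotation of the (q,p) plane. A swarm of system points is evolved and the phase-space area it occupies, estimated from the sample covariance, stays constant.

    import numpy as np

    # A minimal numerical sketch of Liouville's theorem, assuming a
    # one-dimensional harmonic oscillator with m = omega = 1, so the
    # Hamiltonian flow is a rigid rotation of the (q, p) plane.
    rng = np.random.default_rng(0)
    swarm = rng.normal(loc=[2.0, 0.0], scale=[0.3, 0.1], size=(10_000, 2))

    def evolve(points, t):
        """Exact flow: q(t) = q cos t + p sin t, p(t) = p cos t - q sin t."""
        c, s = np.cos(t), np.sin(t)
        return points @ np.array([[c, -s], [s, c]])

    for t in [0.0, 1.0, 5.0, 20.0]:
        moved = evolve(swarm, t)
        # determinant of the covariance estimates the (squared) area of the cloud
        area = np.sqrt(np.linalg.det(np.cov(moved.T)))
        print(f"t = {t:5.1f}   estimated area ~ {area:.6f}")  # constant in t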

We can use Liouville's theorem to devise distribution functions 
which are independent of time, i.e., which represent equilibrium 
macrostates. For example, if f had the same value everywhere in phase space, it would be independent of time; as a part of the swarm moved away from a given region of phase space a different part of the swarm would move in and, since all parts of the swarm have (and keep) the same density, the density in a given region would not change.
We can also devise less trivial stationary distributions, for the path 
traversed by any one system point does not cover all of phase space; 
it confines itself to the hypersurface on which the Hamiltonian func- 
tion H(q,p) is constant; an isolated system cannot change its total 




energy. Therefore if the distribution function is the same for all regions of phase space for which H is the same (i.e., if f is a function of H alone) the density of system points in a given region cannot change as the points move along their constant-H paths.

We shall deal with several different types of distribution functions, 
corresponding to different specifications regarding the thermody- 
namic state of the system. The simplest one is for f to be zero 
everywhere in phase space except on the hypersurface corresponding 
to H(q,p) = E, a constant; the ensemble corresponding to this is called 
a microcanonical ensemble. A more useful case is for f to be proportional to exp[−H(q,p)/kT], corresponding to what is called the canonical ensemble. Other ensembles, with f's which are more complicated functions of H, will also prove to be useful. But, in order
for any of them to represent actual thermodynamic macrostates, we 
must assume a relationship between the distribution function f for 
an ensemble and the corresponding thermodynamic properties of the 
macrostate which the ensemble is supposed to represent. The ap- 
propriate relationship turns out to be between the entropy of the 
macrostate and the distribution function of the corresponding ensemble. 

Quantum States and Phase Space 

Before we state the basic postulate of statistical mechanics, relating entropy and ensembles, we should discuss the modifications which quantum theory makes in our definitions. In some respects the change is in the direction of simplification, the summation over denumerable quantum numbers being substituted for integration over continuous coordinates in phase space. Instead of Hamilton's equations (16-1), there is a Schrödinger equation for a wave function Ψ(q_1, q_2, ..., q_φ) and an allowed value E of energy of the system, both of which depend on the φ quantum numbers ν_1, ν_2, ..., ν_φ, which designate the quantum state of the system.

For example, if the system consists of N particles in the simple crystal model of Eq. (13-12), the classical Hamiltonian for the whole system can be written as

H(q,p) = Σ_{j=1}^{φ} [ (1/2m)p_j² + (mω²/2)q_j² ]    (φ = 3N)    (16-4)

where q_{3i−2} = x_i, q_{3i−1} = y_i, q_{3i} = z_i, p_{3i−2} = p_{xi}, etc. Hamilton's equations become

q̇_i = (1/m)p_i;    ṗ_i = −mω²q_i    (16-5)

and Schrödinger's equation for the system is HΨ = EΨ, where each




p_i in the H is changed to (ħ/i)(∂/∂q_i). For (16-4) it is

Σ_{i=1}^{φ} [ −(ħ²/2m)(∂²Ψ/∂q_i²) + (mω²/2)q_i²Ψ ] = EΨ    (16-6)

where Planck's constant h is equal to 2πħ. Solution of this equation, subject to the requirement that Ψ is finite everywhere, results in the following allowed values of the energy E,

E_{ν_1,...,ν_φ} = Σ_{i=1}^{φ} ħω(ν_i + ½)    (16-7)



The distribution function for an equilibrium state is a function of E_{ν_1,...,ν_φ} and thus is a function of the φ quantum numbers ν_1, ν_2, ..., ν_φ, instead of being a function of H(q,p) and thus a function of the 2φ continuous variables q_1, q_2, ..., q_φ, p_1, ..., p_φ, as it was in classical mechanics. Function f(ν_1,...,ν_φ) is the probability that the system is in the quantum state characterized by the quantum numbers ν_1,...,ν_φ, as contrasted with the probability f(q_1,...,p_φ) dV_q dV_p for phase space. These statements apply to any system, not just to the simple crystal model. The quantum state for any system with φ degrees of freedom, no matter what conservative forces its particles are subjected to, is characterized by φ quantum numbers ν_1,...,ν_φ. To simplify notation we shall often write the single symbol ν instead of the φ individual numbers ν_1,...,ν_φ, just as we write q for q_1,...,q_φ, etc. Thus f_ν is the probability that the system is in the quantum state ν = ν_1,...,ν_φ. The sum Σ_ν f_ν over all allowed states of the system must be unity, of course.

We thus have two alternative ways of expressing the microstates of the system, and thus of writing the distribution function. The quantum way, saying that each quantum state of the system is a separate and distinct microstate, is the correct way, but it sometimes leads to computational difficulties. The classical way, of representing a microstate as a region of phase space, is only an approximate representation, good for large energies; but when valid it is often easier to handle mathematically. The quantitative relationship between these two ways is obtained by working out the volume of phase space "occupied" by one quantum state of the system.

The connection between the classical coordinates q_i and momenta p_i and the quantum state is provided by the Heisenberg uncertainty






principle, Δq_i · Δp_i ≳ h. A restatement of this is that, in the phase space of one degree of freedom, one quantum state occupies a "volume" Δq_i Δp_i equal to h. For example, the one-dimensional harmonic oscillator has a Hamiltonian H_1 = (1/2m)p_1² + (mω²/2)q_1². When in the quantum state ν_1, with energy ħω(ν_1 + 1/2) = (hω/2π)(ν_1 + 1/2), its phase-space orbit is an ellipse in phase space, with semiminor axis q_m = [(h/πmω)(ν_1 + 1/2)]^{1/2} along q_1 and semimajor axis p_m = [(hmω/π)(ν_1 + 1/2)]^{1/2}, which encloses an area

A(ν_1) = πp_m q_m = h(ν_1 + ½)

The area between successive ellipses, for successive values of ν_1, is the area "occupied" by one quantum state. We see that A(ν_1 + 1) − A(ν_1) = h, as stated above.

Thus a volume element dq_i dp_i corresponds, on the average, to (dq_i dp_i/h) quantum states. Similarly, for the whole system, with φ degrees of freedom, the volume element dV_q dV_p = dq_1···dp_φ will correspond, on the average, to (dV_q dV_p/h^φ) quantum states. Thus the correspondence between volume of phase space and number of quantum states is

No. of microstates = (1/h^φ)(vol. of phase space)    (16-8)

when the system has φ degrees of freedom.
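
As a concrete check of Eq. (16-8) for one degree of freedom, the sketch below (my own numerical illustration, not from the text) counts harmonic-oscillator quantum states with energy below E and compares with the enclosed phase-space area divided by h, in units where ħ = m = ω = 1 (so h = 2π):

    import numpy as np

    # Count states E_nu = nu + 1/2 below E and compare with (area)/h,
    # the area inside the ellipse H = E being 2*pi*E/omega.
    hbar = m = omega = 1.0
    h = 2 * np.pi * hbar
    for E in [10.5, 100.5, 1000.5]:
        n_quantum = sum(1 for nu in range(2000) if hbar * omega * (nu + 0.5) < E)
        n_classical = (2 * np.pi * E / omega) / h   # = E in these units
        print(E, n_quantum, n_classical)            # agreement improves with E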

When the volume of phase space occupied by the swarm of system points, representing a particular ensemble, is very large compared to h^φ, the classical representation, in terms of the continuous variables q_1,...,p_φ, can be safely used. But when the volume occupied by the swarm is not large compared to h^φ, the classical representation is not likely to be valid and the quantum representation is needed [see Eq. (19-8) et seq.].




Entropy and Ensembles



As pointed out in the preceding chapter, we are presenting statistical mechanics in "logical" order, with definitions and basic postulates first, theorems and connections with experiment later. The last chapter was devoted to definitions. Each thermodynamic macrostate of a system may be visualized as an ensemble of systems in a variety of microstates, or may be represented quantitatively in terms of a distribution function, which is the probability f_ν that a system chosen from the ensemble is in the quantum state ν = ν_1, ν_2, ..., ν_φ or is the probability density f(q,p) that the system point has the coordinates q_1, q_2, ..., p_φ in phase space, if the macrostate is such that classical mechanics is valid. In this chapter we shall introduce the essential postulates.

Entropy and Information 

The basic postulate, relating the distribution function f_ν to the thermodynamic properties of the macrostate which the ensemble represents, was first stated by Boltzmann and restated in more general form by Planck. In the form appropriate for our present discussion it relates the entropy S of the system to the distribution function f_ν by the equation

S = −k Σ_ν f_ν ln(f_ν);    Σ_ν f_ν = 1    (17-1)

where k is the Boltzmann constant and where the summation is over all the quantum states present in the ensemble (i.e., for which f_ν differs from zero).

This formula satisfies our earlier statements that S is a measure of the degree of disorder of the system [see discussion preceding Eq. (6-14)]. A system that is certainly in its single, lowest quantum state is one in perfect order, so its entropy should be zero. Such a system would have the f_ν for the lowest quantum state equal to unity and all the other f's would be zero. Since ln(1) = 0 and x ln(x) → 0 as x → 0, the sum on the right-hand side of Eq. (17-1) is zero for this case. On the other hand, a disorderly system would be likely to be in any of a number of different quantum states; the larger the number of states it might occupy the greater the disorder. If f_ν = 1/N for N different microstates (label them ν = 1, 2, ..., N) and f_ν is zero for all other states, then

S = −k Σ_{ν=1}^{N} (1/N) ln(1/N) = k ln N

which increases as N increases. Thus Eq. (17-1) satisfies our pre- 
conceptions of the way entropy should behave. It also provides an op- 
portunity to be more exact in regard to the measurement of disorder. 
Disorder, in the sense we have been using it, implies a lack of in- 
formation regarding the exact state of the system. A disordered sys- 
tem is one about which we lack complete information. Equation (17-1) 
is the starting point for Shannon's development of information theory. 
It will be useful to sketch a part of this development, for it will cast 
further light on the meaning of entropy, as postulated in Eq. (17-1). 
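
A one-line numerical check (my own, not in the original; k is set to 1) confirms that the sum of Eq. (17-1) reduces to k ln N for a uniform distribution:

    import numpy as np

    # Entropy of a uniform distribution over N microstates, Eq. (17-1).
    k = 1.0
    for N in [2, 10, 1000]:
        f = np.full(N, 1.0 / N)
        S = -k * np.sum(f * np.log(f))
        print(N, S, k * np.log(N))   # the two values agree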

Information Theory 

A gasoline gauge, with a pointer and scale, gives us more information about the state of the gasoline tank of an automobile than does a red light, which lights if the tank is nearly empty. How much more? Information comes to us in messages, and to convey information each message must tell us something new, i.e., something not completely expected. Quantitatively, if there are N possible messages that could be received, and if the chance that the i-th one will be sent is f_i, then the information I that would be gained if message i were received must be a function I(f_i), which increases as 1/f_i increases. The less likely the message, the greater the information conveyed if the message is sent.

We can soon determine what function I(f_i) must be, for we require that information be additive; if two messages are received and if the messages are independent, then the information gained should be the sum of the I's for each individual message. If the probability of message i be f_i and that for j be f_j then, if the two are independent, Eq. (11-3) requires that the probability that both messages happen to be sent is f_i f_j. The additive requirement for information then requires that

I(f_i f_j) = I(f_i) + I(f_j)




and this, in turn, requires that function I be a logarithmic function of f,

I(f_i) = −C ln f_i

where C is a constant. This is the basic definition of information theory. Since 0 ≤ f_i ≤ 1, I is positive and increases as 1/f_i increases, as required.
as required. 

The definition satisfies our preconceptions of how information behaves. For example, if we receive a message that is completely expected (i.e., its a priori probability is unity) we receive no information and I is zero. The less likely is the message (the smaller is f_i) the greater the amount of information conveyed if it does get sent. The chance that the warning light of the gasoline gauge is off (showing that the tank is not nearly empty) is 0.9, say, so the information conveyed by the fact that the light is not lit is a small amount, equal to −C ln 0.9 ≈ 0.1C. On the other hand, if the gauge has a pointer and five marks, each of which represents an equally likely state of the tank, then the information conveyed by a glance at the gauge is C ln 5 ≈ 1.6C, roughly 16 times the information conveyed by the unlit warning light (the information conveyed by a lit warning light, however, is C ln 10 ≈ 2.3C, a still larger amount).
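
The arithmetic of the gauge example is easily reproduced (a sketch of my own, with C = 1, i.e., information measured in natural units):

    import numpy as np

    # Information conveyed by the gasoline-gauge messages, with C = 1.
    print(-np.log(0.9))   # ~0.105: warning light off (probability 0.9)
    print(np.log(5))      # ~1.609: one of five equally likely gauge marks
    print(np.log(10))     # ~2.303: warning light on (probability 0.1)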

To see how these definitions relate to our discussion of information and disordered systems, let us return to an ensemble, corresponding to some thermodynamic state, with its distribution function f_ν. If we wish to find out exactly what microstate the system happens to be in at any instant, we would subject it to detailed measurement designed to tell us. The results of the measurement would be a kind of message to us, giving us information. If the measurements happened to show that the system is in microstate ν, the information gained by the measurement would be −C ln f_ν, for f_ν is the probability that the system would happen to be in state ν. We of course cannot make a detailed enough examination to determine exactly what microstate a complicated system happens to be in, nor would we wish to do so even if we could. But we can use the expected amount of information we would obtain, if we made the examination, as a measure of our present lack of knowledge of the system, i.e., of the system's disorder.

The expected amount of information we would obtain if we did examine the system in detail is the weighted mean of −C ln f_ν over all quantum states ν in the ensemble, the weighting factor being the probability f_ν of receiving the message that the system is in state ν. This is the sum −C Σ_ν f_ν ln f_ν = (C/k)S, according to Eq. (17-1). Thus the entropy S is proportional to our lack of detailed information regarding the system when it is in the thermodynamic state




corresponding to the distribution function f_ν. Here again Eq. (17-1)
corresponds to our preconceptions regarding the entropy S of a 
system. 

Entropy for Equilibrium States 

But we do not wish to use postulate (17-1) to compute the entropy 
of a thermodynamic state when we know its distribution function; we 
wish to use Eq. (17-1) to find the distribution function for specified 
thermodynamic states, particularly those for equilibrium states. In 
order to do this we utilize a form of the second law. We noted in our 
initial discussion of entropy [see Eq. (6-5)] that in an isolated system 
S tends to increase until, at equilibrium, it is as large as it can be, 
subject to the restrictions on the system. If the sum of Eq. (17-1) is 
to correspond to the entropy, defined by the second law, it too must 
be a maximum, subject to restrictions, for an equilibrium state. 
These requirements should serve to determine the form of the dis- 
tribution function, just as the wave equation, plus boundary condi- 
tions, determines the form of a vibrating string. 

To show how this works, suppose we at first impose no restrictions on f_ν, except that Σ_ν f_ν = 1 and that the number of microstates in the ensemble represented by f_ν is finite (so that the quantum number ν can take on the values 1, 2, ..., W, where W is a finite integer). Then our problem is to determine the value of each f_ν so that

S = −k Σ_{ν=1}^{W} f_ν ln f_ν  is maximum, subject to  Σ_{ν=1}^{W} f_ν = 1    (17-2)

This is a fairly simple problem in the calculus of variations, which 
can be solved by the use of Lagrange multipliers. But to show how 
Lagrange multipliers work, we shall first solve the problem by 
straightforward calculus. 

The requirement that Σ f_ν = 1 means that only W−1 of the f's can be varied independently. One of the f's, for example f_W, depends on the others through the equation f_W = 1 − Σ_{ν=1}^{W−1} f_ν. Now S is a symmetric function of all the f's, so we can write it S(f_1, f_2, ..., f_W), where we can substitute for f_W in terms of the others. In order that S be maximum we should have the partial derivative of S with respect to each independent f be zero. Taking into account the fact that f_W depends on all the other f's, these equations become






(∂S/∂f_1) + (∂f_W/∂f_1)(∂S/∂f_W) = (∂S/∂f_1) − (∂S/∂f_W) = 0

(∂S/∂f_2) − (∂S/∂f_W) = 0    (17-3)
  ···
(∂S/∂f_{W−1}) − (∂S/∂f_W) = 0

The values of the f's which satisfy these equations, plus the equation Σ_ν f_ν = 1, are those for which Eq. (17-2) is satisfied. For these values of the f's, the partial derivative ∂S/∂f_W will have some value; call it −α_0. Then we can write Eqs. (17-3) in the more symmetric form



(∂S/∂f_1) + α_0 = 0;    (∂S/∂f_2) + α_0 = 0;    ···;    (∂S/∂f_W) + α_0 = 0



However, this is just the set of equations we would have obtained if, instead of the requirement that S(f_1,...,f_W) be maximum, subject to Σ f_ν = 1 of Eq. (17-2), we had instead used the requirement that

S(f_1,...,f_W) + α_0 Σ_{ν=1}^{W} f_ν  be maximum,
α_0 determined so that  Σ_{ν=1}^{W} f_ν = 1    (17-4)

Constant α_0 is a Lagrange multiplier.

Let us now solve the set of equations (17-4), inserting Eq. (17-1) for S. We set each of the partials of S + α_0 Σ f_ν equal to zero. For example, the partial with respect to f_κ is

0 = (∂/∂f_κ)[ α_0 Σ_{ν=1}^{W} f_ν − k Σ_{ν=1}^{W} f_ν ln f_ν ] = α_0 − k ln f_κ − k

or

f_κ = exp[(α_0/k) − 1]




The solution indicates that all the f's are equal, since neither α_0 nor k depends on κ. The determination of the value of α_0, and thus of the magnitude of f_ν, comes from the requirement Σ_ν f_ν = 1; f_ν = (1/W), so that

S = −k Σ_{ν=1}^{W} (1/W) ln(1/W) = k ln W    (17-5)

For a system restricted to a finite number W of microstates, and 
with no other restrictions, the state of maximum entropy is that for 
which the system is equally likely to be in any of the W microstates, 
and the corresponding maximal value of the entropy is k times the 
natural logarithm of the number W (which is sometimes called the 
statistical weight of the equilibrium macrostate). 
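
The same maximization can be carried out numerically; the sketch below (my own illustration, using scipy's constrained optimizer rather than the analytic Lagrange-multiplier argument, with k = 1) recovers the uniform distribution and the entropy k ln W of Eq. (17-5):

    import numpy as np
    from scipy.optimize import minimize

    # Maximize S = -sum f ln f subject to sum f = 1, for W = 5 states;
    # the optimum should be f_nu = 1/W with S = ln W.
    W = 5
    neg_entropy = lambda f: np.sum(f * np.log(f))
    result = minimize(neg_entropy,
                      x0=np.random.dirichlet(np.ones(W)),
                      bounds=[(1e-9, 1.0)] * W,
                      constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1}])
    print(result.x)                  # ~[0.2, 0.2, 0.2, 0.2, 0.2]
    print(-result.fun, np.log(W))    # both ~ ln 5 = 1.609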

Application to a Perfect Gas 

To show how much is inherent in these abstract-sounding results, we apply them to a gas of N point particles, confined in a container of volume V. To say that the system is confined to a finite number of quantum states is equivalent, classically, to saying that the system point is confined to a finite volume in phase space. In fact, from Eq. (16-8), the volume of phase space Ω, within which the system point is to be found, is Ω = h^φ W, where φ = 3N is the number of degrees of freedom of the gas of N particles. And, from Eq. (17-5), we see that the system point is equally likely to be anywhere within this volume Ω. Thus, classically, the analogue of Eq. (17-5) is

f(q,p) = 1/Ω;    S = k ln(Ω/h^{3N})    (17-6)

As long as the volume of the container V is considerably larger than atomic dimensions, Ω is likely to be considerably larger than h^{3N}, so the classical description, in terms of phase space, is valid [see the discussion following Eq. (16-8)].

The volume Ω can be computed by integrating dV_q dV_p over the region allowed to the system. Since each particle is confined to the volume V, the integration over the position coordinates is

∫···∫ dV_q = ∫···∫ dx_1 dy_1 dz_1 ··· dx_N dy_N dz_N = V^N    (17-7)

The integration of dV_p will be discussed in the next chapter; here we shall simply write it as Ω_p. Therefore,




Ω = V^N Ω_p  and  S = Nk ln V + k ln(Ω_p/h^{3N})    (17-8)

Comparison with Eq. (6-5) shows that the entropy of a perfect gas does indeed have a term Nk ln V (Nk = nR).

Moreover, in this case of uniform distribution within Ω, the mean energy of the gas, U = Σ_ν f_ν E_ν, will be given by the integral

U = (1/Ω) ∫···∫ dV_q ∫···∫ H dV_p = (V^N/Ω) ∫···∫ H dV_p

where

H = (1/2m) Σ_{i=1}^{N} (p_{xi}² + p_{yi}² + p_{zi}²)

is the total energy of the perfect gas. Thus U is a function of Ω_p, as well as of m and N and the shape of the volume in phase space within which the ensemble is confined; we can emphasize this by writing it as U(Ω_p). Note that this is so only when H is independent of the q's. However, from Eq. (17-8) we have

Ω_p = (h³/V)^N e^{S/k}    so    U = U(h^{3N} V^{−N} e^{S/k})

Thus the formalism of Eq. (17-4) has enabled us to determine something about the dependence of the internal energy on V and S for a system with H independent of the q's. We do not know the exact form of the dependence, but we do know that it is via the product V^{−N} e^{S/k}. From this one fact we can derive the equation of state for the perfect gas. We first refer to Eqs. (8-1). If the postulates (17-1) and (17-4) are to correspond to experiment, the partials of the function U(h^{3N} V^{−N} e^{S/k}) with respect to S and V must equal T and −P, respectively. But, writing x = h^{3N} V^{−N} e^{S/k},

(∂U/∂S)_V = (x/k) U′(x);    (∂U/∂V)_S = −(Nx/V) U′(x)

Thus, if the first partial is to equal the thermodynamic temperature T and minus the second partial is the pressure P for any system with H dependent only on the momenta, the relationship between T, P, and V must be P = (kN/V)T = nRT/V, which is the equation of state of a perfect gas. Postulate (17-1) does indeed have ties with "reality."
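
The chain-rule argument can be verified symbolically. The sketch below (my own check, using sympy; the only assumption is the form U = U(x) with x = h^{3N}V^{−N}e^{S/k} derived above) confirms that PV − NkT vanishes identically, whatever the function U:

    import sympy as sp

    # Verify P V = N k T for any U that depends on S and V only
    # through x = h**(3N) * V**(-N) * exp(S/k).
    S, V, N, k, h = sp.symbols('S V N k h', positive=True)
    U = sp.Function('U')
    x = h**(3*N) * V**(-N) * sp.exp(S / k)
    T = sp.diff(U(x), S)        # (dU/dS)_V, the temperature
    P = -sp.diff(U(x), V)       # -(dU/dV)_S, the pressure
    print(sp.simplify(P * V - N * k * T))   # -> 0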




The Microcanonical Ensemble



We postponed discussing exactly what volume of momentum space was represented by the quantity Ω_p of Eq. (17-8), because we did not need to discuss it at that point. Before we can evaluate Ω_p we must take into account the requirements of Liouville's theorem, mentioned in the paragraphs following Eq. (16-3). There it was shown that if the distribution function f(q,p) had the same value for every point in phase space at which the energy of the system is the same (i.e., over a surface of constant energy), then f will be independent of time. Thus the finite volume of phase space occupied by the ensemble of Eq. (17-6) must be a region between two constant-energy surfaces, or else just the "area" of a single constant-energy surface.

Example of a Simple Crystal 

The latter alternative, an ensemble of systems all of which have the same energy, is one that should be investigated further. Since the "volume" of phase space occupied by such an ensemble is finite, the discussion of the previous section applies, and for maximum entropy the distribution function should be constant over the entire constant-energy surface. For the resulting ensemble, for which f(q,p) = 0 unless H(q,p) = U (or f_ν = 0 unless E_ν = U, for the case where quantum theory is used), Eq. (17-6) becomes

f = 1/Ω (or 1/W)  when H(q,p) = U (or when E_ν = U),
  zero otherwise;  and  S = k ln(Ω/h^φ)  (or k ln W)    (18-1)

and the ensemble is called the microcanonical ensemble. Quantity Ω is the "area" of the surface in phase space for which H(q,p) = U and W is the total number of quantum states that have allowed energies E_ν = U (and, when Ω is large enough, Ω ≈ h^φ W).





We shall work out the microcanonical distribution for two simple systems. The first is the simple crystal model of Eq. (13-12), with each of the N atoms in the crystal lattice being a three-dimensional harmonic oscillator with frequency ω/2π. Here we shall use quantum theory, since the formula for the allowed energies of a quantized oscillator is simple. The allowed energy for the i-th degree of freedom is ħω(ν_i + 1/2), where ħ = h/2π and ν_i is the quantum number for the i-th degree of freedom. Therefore the allowed energy of vibration of this crystal is the sum



E_ν = ħω Σ_{i=1}^{φ} ν_i + (φ/2)ħω;    φ = 3N    (18-2)

and the total internal energy, including the potential energy of static compression [see Eq. (13-15)], is

U = E_ν + [(V − V_0)²/2κV_0] = ħωM + (φ/2)ħω + [(V − V_0)²/2κV_0]    (18-3)

where M = (ν_1 + ν_2 + ··· + ν_φ) and φ = 3N.

A microcanonical ensemble would consist of equal proportions of all the states for which M is a constant integer, and W is the number of different permutations of the quantum numbers ν_i whose sum is M. This number can be obtained by induction. When φ = 1, there is only one state for which ν_1 = M, so W = 1. When φ = 2, there are M + 1 different states for which ν_1 + ν_2 = M, one for ν_1 = 0, ν_2 = M, another for ν_1 = 1, ν_2 = M − 1, and so on to ν_1 = M, ν_2 = 0. When φ = 3, there are M + 1 different combinations of ν_2 and ν_3 when ν_1 is 0, M different ones when ν_1 = 1, and so on, so that

W = (M + 1) + M + (M − 1) + ··· + 2 + 1 = ½(M + 1)(M + 2)    for φ = 3

Continuing as before, we soon see that for φ different ν's (i.e., φ degrees of freedom),

W = (M + φ − 1)!/[M!(φ − 1)!] = (M + 3N − 1)!/[M!(3N − 1)!]    (18-4)
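
Equation (18-4) is easy to verify by brute force for small systems; the sketch below (my own check, not from the text) enumerates all tuples of quantum numbers summing to M:

    from itertools import product
    from math import comb

    # Count tuples (nu_1, ..., nu_phi) of non-negative integers with
    # sum M, and compare with W = (M + phi - 1)! / (M! (phi - 1)!).
    for phi, M in [(1, 4), (2, 4), (3, 4), (4, 7)]:
        count = sum(1 for nus in product(range(M + 1), repeat=phi)
                    if sum(nus) == M)
        formula = comb(M + phi - 1, phi - 1)
        print(phi, M, count, formula)   # count equals formula in every case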

From this and from Eq. (18-3) we can obtain W as a function of U; 






then from Eq. (18-1) we can obtain S as a function of U. The probability f_ν that the system has any one of the combinations of the ν's which add up to M is 1/W.

Since both M and W are large integers, we can use the asymptotic formula for the factorial function,

n! ≈ (2πn)^{1/2} n^n e^{−n},    n ≫ 1    (18-5)


which is called Stirling's formula. Using it, we can obtain a simple approximation for the number W of different quantum states which have the same value of M, and thus of U:

W ≈ [(M + φ)/2πMφ]^{1/2} (M + φ)^{M+φ} M^{−M} φ^{−φ}    (18-6)

where, since φ = 3N is large, we have substituted φ for φ − 1.
Therefore the entropy of the simple crystal is 



S = k ln W ≈ kM ln(1 + φ/M) + kφ ln(1 + M/φ)    (18-7)

where we have neglected the logarithm of the square root, since it is so much smaller than the other two terms.

Let us first consider the high-energy case, where the average oscillator is in a highly excited state (ν_i ≫ 1) and therefore where M ≫ φ = 3N. In this case we can neglect 1 compared to M/φ and we can use the approximation ln(1 + x) ≈ x, good for x ≪ 1, obtaining

S ≈ kφ + kφ ln(M/φ) = kφ ln(eM/φ)    (18-8)

so that

U ≈ (3Nħω/e) e^{S/3kN} + (3/2)Nħω + [(V − V_0)²/2κV_0],    φ = 3N ≪ M

Remembering that T = (∂U/∂S)_V, we can find T as a function of S and then U as a function of T and V, for this high-temperature case:

T = ħω(∂M/∂S)_V ≈ (ħω/ek) e^{S/3kN}    (18-9)

so that

U ≈ 3NkT + (3/2)Nħω + [(V − V_0)²/2κV_0]




This is to be compared with Eq. (13-15). There is a "zero-point" energy (3/2)Nħω additional to this formula, but it is small compared to the first term at high temperatures (kT ≫ ħω).

At very low energies we have φ ≫ M, in which case we can write

S ≈ kM ln(φ/M) + kM = kM ln(3eN/M)

so that

1/T = (∂S/∂U)_V = (1/ħω)(∂S/∂M)_V = (k/ħω) ln(3N/M)

or

M = 3N e^{−ħω/kT}

and

U ≈ 3Nħω(e^{−ħω/kT} + ½) + [(V − V_0)²/2κV_0]    (18-10)



Thus as T → 0 (M → 0) the entropy and the heat capacity (∂U/∂T)_V both go to zero, a major point of difference from the results of Chapter 13. We shall see later [in the discussion of Eq. (19-11), for example] that this vanishing of S and of C_v as T → 0 is characteristic of systems when classical physics is no longer a good approximation and the effects of quantization are apparent.
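
The approximations made in Eqs. (18-6) and (18-7) can be tested directly against the exact count (a sketch of my own; gammaln(n+1) = ln n!, and k = 1):

    import numpy as np
    from scipy.special import gammaln

    # Exact S = ln W for the Einstein solid versus the low-energy
    # approximation S ~ M ln(3eN/M), valid for M << phi = 3N.
    N = 1000
    phi = 3 * N
    for M in [10, 100, 1000]:
        lnW = gammaln(M + phi) - gammaln(M + 1) - gammaln(phi)
        approx = M * np.log(3 * np.e * N / M)
        print(M, lnW, approx)   # close while M << phi; drifts apart as M grows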

Microcanonical Ensemble for a Perfect Gas 

In the case of a perfect gas of N point particles in a volume V of "normal" size, the energy levels are so closely spaced that we can use classical physics for temperatures greater than a fraction of a degree Kelvin. Thus a microcanonical ensemble for such a system is represented by a distribution function f(q,p) which is zero everywhere in phase space except on the "surface,"




H = Σ_{i=1}^{3N} (1/2m)p_i² = U (a constant)    (18-11)

where it is 1/Ω, Ω being the integral of dV_q dV_p over this surface (i.e., Ω is the "area" of the surface). The entropy of the microcanonical ensemble is then given by Eq. (18-1). This classical approximation should be valid for T ≫ 0.01°K, as will be shown later [see the discussion following Eq. (21-5)].

Since the energy of a perfect gas is independent of the positions of the particles, the integral of dV_q, as shown in Eq. (17-7), is simply the N-th power of the volume V of the container. The integral of dV_p, however, is the "area" of the surface in momentum space defined by Eq. (18-11). This surface is the 3N-dimensional generalization of a spherical surface; the coordinates are the p's and the radius is R = (2mU)^{1/2},

p_1² + p_2² + ··· + p_φ² = R²;    φ = 3N;    R² = 2mU

Once the area Ω_p of this hyperspherical surface is computed, the rest of the calculation is easy, for the volume of phase space occupied is Ω = V^N Ω_p, and f and S are given by Eq. (18-1).

To find the area we need to define some hyperspherical coordinates. Working by induction:

For two dimensions:
x_1 = R cos θ_1, x_2 = R sin θ_1, x_1² + x_2² = R²
Element of length of circle: ds = R dθ_1

For three dimensions:
x_1 = R cos θ_1, x_2 = R sin θ_1 cos θ_2, x_3 = R sin θ_1 sin θ_2
Element of area of sphere: dA = R² sin θ_1 dθ_1 dθ_2

For four dimensions:
x_1 = R cos θ_1, x_2 = R sin θ_1 cos θ_2, x_3 = R sin θ_1 sin θ_2 cos θ_3,
x_4 = R sin θ_1 sin θ_2 sin θ_3
Element of surface: dA = R³ sin² θ_1 sin θ_2 dθ_1 dθ_2 dθ_3




For φ dimensions:

x_1 = R cos θ_1, x_2 = R sin θ_1 cos θ_2, ...    (18-12)
x_{φ−1} = R sin θ_1 ··· sin θ_{φ−2} cos θ_{φ−1},    x_φ = R sin θ_1 ··· sin θ_{φ−1}
dA = R^{φ−1} sin^{φ−2} θ_1 sin^{φ−3} θ_2 ··· sin θ_{φ−2} dθ_1 dθ_2 ··· dθ_{φ−1}

where angle θ_{φ−1} goes from 0 to 2π and angles θ_1, ..., θ_{φ−2} go from 0 to π. To integrate this area element we need the formula

∫_0^π sin^n θ dθ = √π [(½n − ½)!/(½n)!]    (18-13)

where m! is the factorial function

m! = ∫_0^∞ x^m e^{−x} dx = m·(m − 1)!;    0! = 1! = 1;    (−½)! = √π = 2·(½)!    (18-14)

with asymptotic values given by Stirling's formula (18-5). Thus the total area of the hypersphere is [neglecting such factors as √2 and √(1 − 1/φ)]



A = Ω_p = R^{φ−1} ∫_0^π sin^{φ−2} θ_1 dθ_1 ··· ∫_0^π sin θ_{φ−2} dθ_{φ−2} ∫_0^{2π} dθ_{φ−1}

  = 2π^{φ/2} R^{φ−1}/(½φ − 1)! ≈ (4πmUe/φ)^{φ/2} (2mU)^{−1/2}    (18-15)




and the final expression for the volume of phase space occupied is

Ω ≈ V^N (4πmUe/3N)^{(3/2)N}    (18-16)

where we have used Eq. (18-5), have replaced (φ − 1)/2 by (1/2)φ = (3/2)N, and have used the limiting formula for the exponential function

(1 + x/n)^n → e^x,    n → ∞    (18-17)

The e in the formula for Ω is, of course, the base of the natural logarithm, e = 2.71828···.
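
The Stirling-style reduction of the hypersphere area can be checked numerically (a sketch of my own; R = 1, and the √-type factors the text neglects are dropped from the approximate form):

    import numpy as np
    from scipy.special import gammaln

    # Exact ln of A = 2 pi^(phi/2) R^(phi-1) / (phi/2 - 1)!  versus the
    # approximate (2 pi e R^2 / phi)^(phi/2), for R = 1.
    for phi in [30, 300, 3000]:
        exact = np.log(2) + (phi / 2) * np.log(np.pi) - gammaln(phi / 2)
        approx = (phi / 2) * np.log(2 * np.pi * np.e / phi)
        print(phi, exact, approx)   # relative difference shrinks as phi grows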

Consequently the entropy of the gas is, from Eq. (17-6),

S = Nk ln[V(4πmUe/3Nh²)^{3/2}]

or

U = (3Nh²/4πme) V^{−2/3} e^{2S/3Nk}    (18-18)

which is to be compared with Eq. (17-9) as well as with the discussion following Eq. (6-6). As with the discussion of Eq. (17-9), we can now obtain the thermodynamic temperature and pressure,

T = (∂U/∂S)_V = (h²/2πmek) V^{−2/3} e^{2S/3Nk}
    (18-19)
P = −(∂U/∂V)_S = (Nh²/2πme) V^{−5/3} e^{2S/3Nk} = NkT/V

Also,  U = (3/2)NkT  and  C_v = (∂U/∂T)_V = (3/2)Nk

So the microcanonical ensemble does reproduce the thermodynamic behavior of a perfect gas, in complete detail. With Eq. (17-9) we were able to obtain the equation of state, but now that we have computed the dependence of Ω_p on U and N, the theoretical model also correctly predicts the dependence of U on T and thence the heat capacity of the gas.

The Maxwell Distribution 

The microcanonical ensemble can also predict the velocity dis- 
tribution of molecules in the gas; if we have the distribution function 
for the whole gas we can obtain from it the distribution function for 
a constituent particle. Utilizing Eq. (18-19) we can restate our results 




thus far. Since R = (2mU)^{1/2} and U = (3/2)NkT, we can say that the systems in the microcanonical ensemble for a perfect gas of point particles are uniformly distributed on the surface of a hypersphere of radius (3NmkT)^{1/2} in 3N-dimensional momentum space. The probability that the point representing the system is within any given region dA on this surface is equal to the ratio of the area of the region dA to the total area A = Ω_p of the surface. The region near where the p_1 axis cuts the surface, for example, corresponds to the microstates in which the x component of momentum of particle 1 carries practically all the energy (1/2)φkT of the whole system. It is an interesting property of hyperspheres of large dimensionality (as we shall show) that the areas close to any axis are negligibly small compared to the areas well away from any axis (where the energy is relatively evenly divided between all the degrees of freedom). Therefore the chance that one degree of freedom will turn out to have most of the energy of the whole gas, (1/2)φkT, and that the other components of momentum are zero is negligibly small.

To show this, and incidentally to provide yet another "derivation" of the Maxwell distribution, we note that the probability that the momentum coordinate which we happen to have labeled by the subscript 1 has a value between p_1 and p_1 + dp_1 can be obtained easily, since we have chosen our angle coordinates such that p_1 = R cos θ_1. Thus the probability

dA/A = (1/A) R^{φ−1} sin^{φ−2} θ_1 dθ_1 sin^{φ−3} θ_2 dθ_2 ··· sin θ_{φ−2} dθ_{φ−2} dθ_{φ−1}    (18-20)

as a function of θ_1, is only large near θ_1 = (1/2)π [i.e., where p_1 = (φmkT)^{1/2} cos θ_1 is very small compared to (φmkT)^{1/2}] and drops off very rapidly, because of the large power of sin θ_1, whenever the magnitude of p_1 increases. The factor sin^{φ−2} θ_1 ensures that the probability is very small that the degree of freedom labeled 1 carries most of the total kinetic energy (1/2)φkT of the gas. This would be true for each degree of freedom. It is much more likely that each degree of freedom carries an approximately equal share, each having an amount near (1/2)kT.

The formula for the probability that degree of freedom 1 have momentum between p_1 and p_1 + dp_1, irrespective of the values of the other momenta, is obtained by integrating dA/A over θ_2, θ_3, ..., θ_{φ−1}. Using the results of Eqs. (18-13) to (18-17) produces




f(p_1) dp_1 = [1/(πφmkT)^{1/2}] {[(1/2)φ − 1]!/[(1/2)φ − 3/2]!} [1 − (p_1²/φmkT)]^{(φ−3)/2} dp_1

since −dθ_1 = (1/φmkT)^{1/2}(dp_1/sin θ_1) and sin²θ_1 = 1 − (p_1²/φmkT). To obtain the Maxwell distribution in its usual form we utilize Eqs. (18-5) and (18-17) and consider factors like [1 − (2/φ)]^{1/2} and [1 − (p_1²/φmkT)]^{−3/2} to equal unity [but not such factors to the (1/2)φ power, of course]. The calculations go as follows:

f(p_1) dp_1 ≈ [1/(2πmkT)^{1/2}] [1 − (p_1²/φmkT)]^{(φ−3)/2} dp_1
           ≈ [1/(2πmkT)^{1/2}] exp(−p_1²/2mkT) dp_1    (18-21)

which is the familiar Maxwell distribution for one degree of freedom 
[see Eq. (12-7)]. 

This time we arrived at the Maxwell distribution as a consequence of requiring that the total kinetic energy of the gas be (1/2)φkT and that all possible distributions of this energy between the degrees of freedom be equally likely. For φ = 3N large, by far the majority of these configurations represent the energy being divided more or less equally between all degrees of freedom, with a variance for each p equal to 2m times the mean kinetic energy per degree of freedom, (1/2)kT. We note that the Maxwell distribution is not valid unless the individual atoms are, most of the time, unaffected by the other atoms, mutual collisions being rare events.
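
This geometric route to the Maxwell distribution can be tested by direct sampling. The sketch below (my own illustration, with m = kT = 1) places points uniformly on a hypersphere of radius (φmkT)^{1/2} and examines the marginal distribution of a single momentum component:

    import numpy as np

    # Uniform points on a hypersphere of radius sqrt(phi) in phi
    # dimensions: the one-coordinate marginal approaches the Maxwell
    # form exp(-p^2/2mkT)/(2 pi m kT)^(1/2) for large phi (m = kT = 1).
    rng = np.random.default_rng(1)
    phi = 3000
    x = rng.normal(size=(100_000, phi))
    p = np.sqrt(phi) * x / np.linalg.norm(x, axis=1, keepdims=True)
    p1 = p[:, 0]
    print(p1.mean(), p1.var())        # ~0 and ~1 = mkT
    print(np.mean(np.abs(p1) > 3))    # ~0.0027, the Gaussian tail value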




The Canonical Ensemble



The microcanonical ensemble has sufficed to demonstrate that the 
basic postulates of statistical mechanics correspond to the facts of 
thermodynamics as well as of kinetic theory. But it has several draw- 
backs, hindering its general use. In the first place, the computation 
of the number of microstates that have a given energy is not always 
easy. It actually would be easier to calculate average values with a 
distribution function that included a range of energies, rather than one 
that differs from zero only when the energy has a specific value. 

In the second place (and perhaps more importantly) the microcanonical ensemble corresponds to a system with energy U, completely isolated from the rest of the universe, which is not the way a thermodynamic system is usually prepared. We usually do not know the exact value of the system's energy; we much more often know its temperature, which means that we know its average energy. In other words, we do not usually deal with completely isolated systems, but we do often deal with systems kept in contact with a heat reservoir at a given temperature, so that the energy varies somewhat from instant to instant but its time average is known. This changes the boundary conditions of Eq. (17-2) and the resulting distribution function will differ from that of Eq. (18-1).

Solving for the Distribution Function 

Suppose we prepare an ensemble as follows: Each system has the same number of particles N and has the same forces acting on the particles. Each system is placed in a furnace and brought to equilibrium at a specified temperature, with each system enclosed in a volume V. Thus, although we do not know the exact energy of any single system, we do require that the mean energy, averaged over the ensemble, have the relationship to S and T expressed in Eqs. (6-3) and (8-8), for example. The distribution function for such







an ensemble, corresponding to a system in contact with a heat reservoir, should satisfy the following requirements:

S = −k Σ_ν f_ν ln f_ν  is maximum,
subject to  Σ_ν f_ν = 1  and  Σ_ν f_ν E_ν = U,    (19-1)

the internal energy.

We solve for f_ν by using Lagrange multipliers. We require that

S + α_0 Σ_ν f_ν + α_e Σ_ν f_ν E_ν  be maximum,
with α_0 and α_e adjusted so that    (19-2)
Σ_ν f_ν = 1  and  Σ_ν f_ν E_ν = U

where U, for example, satisfies the equation U = F + TS. Setting the partials of this function, with respect to the f's, equal to zero we obtain

−k ln f_ν − k + α_0 + α_e E_ν = 0
or
f_ν = exp[(α_0 − k + α_e E_ν)/k]    (19-3)

The value of the Lagrange multiplier α_0 is adjusted to satisfy the first subsidiary condition,

e^{(α_0/k)−1} Σ_ν e^{α_e E_ν/k} = 1;    e^{α_0/k} = e/Z;    Z = Σ_ν e^{α_e E_ν/k}

The value of the Lagrange multiplier α_e is obtained by requiring that the sum Σ_ν f_ν E_ν should behave like the thermodynamic potential U. For example, the sum we are calling the entropy is related to the sum we are calling U, by virtue of Eq. (19-3), as follows:

S = −k Σ_ν f_ν ln f_ν = −Σ_ν f_ν(α_0 − k + α_e E_ν) = −(α_0 − k) − α_e U

where we have used Eq. (19-3) for ln f_ν and also have used the fact that the f's must satisfy Σ_ν f_ν = 1 (the first subsidiary condition).

However, if U is to be the thermodynamic potential of Eqs. (6-3) and (8-8), this relation between S and U should correspond to the equation S = (−F + U)/T. Therefore the Lagrange multipliers must have the following values:

α_0 − k = F/T;    α_e = −(1/T)

The solution of requirements (19-1) is therefore




f_ν = (1/Z) e^{−E_ν/kT};    Z = Σ_ν e^{−E_ν/kT} = e^{−F/kT}

S = k Σ_ν f_ν[ln Z + (E_ν/kT)] = (U − F)/T = −(∂F/∂T)_V    (19-4)



The ensemble corresponding to this distribution is called the 
canonical ensemble. The normalizing constant Z, considered as a 
function of T and V, is called the partition function. Part of the 
computational advantage of the canonical ensemble is the fact that 
all the thermodynamic functions can be computed from the partition 
function. For example, 

F = −kT ln Z;    S = −(∂F/∂T)_V;    P = −(∂F/∂V)_T    (19-5)
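
As a minimal worked example of Eq. (19-5) — a sketch of my own, not from the text — consider a single two-level system with energies 0 and ε (k = 1); every thermodynamic function follows from Z:

    import numpy as np

    # Two-level system: Z = 1 + exp(-eps/T); F from Eq. (19-5), with
    # the derivative S = -(dF/dT)_V taken numerically, and U = F + TS.
    eps = 1.0
    Z = lambda T: 1.0 + np.exp(-eps / T)
    F = lambda T: -T * np.log(Z(T))
    for T in [0.1, 1.0, 10.0]:
        dT = 1e-6 * T
        S = -(F(T + dT) - F(T - dT)) / (2 * dT)
        U = F(T) + T * S
        print(T, F(T), S, U)   # S -> 0 as T -> 0; S -> ln 2 as T grows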

When the separations between successive allowed energies E_ν are considerably less than kT, classical mechanics can be used and instead of sums over the quantum states ν of the system we can use integrals over phase space. The distribution function is the probability density f(q,p), and, for a system with φ degrees of freedom,

f(q,p) = (1/h^φ Z) e^{−H(q,p)/kT}
    (19-6)
Z = (1/h^φ) ∫···∫ e^{−H/kT} dV_q dV_p

where H(q,p) is the Hamiltonian function of the system, the kinetic 
plus potential energy, expressed in terms of the q's and p's [see 
Eqs. (13-9) and (16-1)]. From Z one can then obtain F, S, etc., as 
per Eq. (19-5). The H of Eq. (19-6) is the total energy of the system, 
whereas the H of Eq. (13-10) is the energy of a single molecule. One might say that the canonical distribution function is the Maxwell-Boltzmann distribution for a whole system. It is an exact solution, whereas the f of Eq. (13-10) for a molecule is only valid in the limit of vanishing interactions between molecules.

General Properties of the Canonical Ensemble 

The first thing to notice about this ensemble is that the distribution function is not constant; all values of the energy are present, but some of them are less likely to occur than others. In general, the larger the energy H the smaller is f. However, to find out the probability that the system has a particular value of the energy, we must multiply f by Ω(H), the "area" of the surface of constant H in phase space. This usually increases rapidly with H; for example, Ω = Wh^φ ≈ (2πeH/3Nω)^{3N} for the simple crystal of Eq. (18-8) and Ω ≈ V^N(4πmHe/3N)^{(3/2)N} for the perfect gas of Eq. (18-16). The product f(q,p)Ω(H) ∝ Ω(H)e^{−H/kT} at first increases and then, for H large enough, the exponential function "takes over" and fΩ eventually drops to zero as H → ∞.

The value of H that has the most representatives in the ensemble is the value for which Ωe^{−H/kT} is maximum. For the gas this value is H = (3/2)NkT and for the crystal it is 3NkT; in each case it is equal to the average value U of energy of the ensemble. The number of systems in the ensemble with energy larger or smaller than this mean value U diminishes quite sharply as |H − U| increases. Although some systems with H ≠ U do occur, the mean fractional deviation from the mean (ΔH/U) of the canonical distribution turns out to be inversely proportional to √φ and thus is quite small when φ is large. Therefore the canonical ensemble really does not differ very much from the microcanonical ensemble; the chief difference is that it often is easier to handle mathematically.
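
The 1/√φ sharpness claim can be illustrated by sampling. For φ quadratic degrees of freedom the canonical energy distribution Ω(H)e^{−H/kT} is a Gamma distribution, so the sketch below (my own, with kT = 1) simply samples it:

    import numpy as np

    # Fractional energy spread of a canonical ensemble with phi
    # quadratic degrees of freedom: H ~ Gamma(phi/2, kT), so
    # (Delta H)/U = sqrt(2/phi), shrinking as phi grows.
    rng = np.random.default_rng(2)
    for phi in [10, 1000, 100_000]:
        H = rng.gamma(shape=phi / 2, scale=1.0, size=200_000)
        print(phi, H.std() / H.mean(), np.sqrt(2 / phi))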

The advantages of this greater ease are immediately apparent when we wish to determine the general properties of a canonical ensemble. For these can be deduced from the properties of the partition function Z. For example, in many cases the system consists of N subsystems, the i-th having δ_i degrees of freedom (so that φ = Σ_{i=1}^{N} δ_i), each subsystem having negligible interaction with any other, although there may be strong forces holding each subsystem together. For a perfect gas of N molecules, the molecules are the subsystems, the number of degrees of freedom of each molecule being three times the number of particles per molecule. For a tightly bound crystal lattice the "subsystems" are the different normal modes of vibration of the crystal, and so on. Whenever such a separation is possible, the partition function turns out to be a product of N factors, one for each subsystem.

To see why this is so, we note that if the subsystems are mutually independent, the Hamiltonian of the system is a sum of N separate terms, H = Σ_{j=1}^{N} H_j, where H_j is the energy of the j-th subsystem and is independent of the coordinates of any other subsystem. For a quantized system, the energy E_{ν_1ν_2···ν_φ} is a sum of N separate terms, the term E_j being the allowed energy of the j-th subsystem, dependent on δ_j quantum numbers only (call them ν_j, ν_{j+1}, ..., ν_{j+δ_j}) and independent of all the quantum numbers of the other subsystems. The partition function is the sum of exp(−E_{ν_1ν_2···ν_φ}/kT) = e^{−E_1/kT}···e^{−E_j/kT}···e^{−E_N/kT} over all quantum states of all subsystems,

Z = Σ_{all ν} exp(−E_{ν_1ν_2···ν_φ}/kT) = z_1·z_2···z_j···z_N

where    (19-7)

z_j = Σ exp[−E_j(ν_j, ν_{j+1}, ..., ν_{j+δ_j})/kT]




the sum for z_j being over all the quantum numbers of the j-th subsystem.

For example, the energy of interaction between the magnetic field 
and the orientation of the atomic magnets in a paramagnetic solid is, 
to a good approximation, independent of the motion of translation or 
vibration of these and other atoms in the crystal. Consequently the 
magnetic term in the Hamiltonian, the corresponding factor in the 
partition function, and the resulting additive terms in F and S can 
be discussed and calculated separately from all the other factors and 
terms required to describe the thermodynamic properties of the 
paramagnetic material. This of course is what was done in Chapter 
13 [see Eqs. (13-16) to (13-18)]. 

The Effects of Quantization 

To follow this general discussion further, we need to say something about the distribution of the quantized energy levels of the j-th subsystem. There will always be a lowest allowed level, which we can call E_{j1}. This may be multiple, of course; there may be g_{j1} different quantum states, all with this same lowest energy. The next lowest energy can be labeled E_{j2}; it may have multiplicity g_{j2}; and so on. Thus we have replaced the set of δ_j quantum numbers for the j-th subsystem by the single index number ν, which runs from 1 to ∞, and for which E_{j,ν+1} > E_{j,ν}, the ν-th level having multiplicity g_{jν}.

Thus the j-th factor in the partition function can be written

z_j = Σ_{ν=1}^{∞} g_{jν} e^{−E_{jν}/kT}    (19-8)

If the energy differences E_{j2} − E_{j1} and E_{j3} − E_{j2}, between the lowest three allowed energy levels of the j-th subsystem, are quite large compared to kT, then the second term in the sum for z_j is small compared to the first and the third term is appreciably smaller yet, so that

z_j ≈ g_{j1} e^{−E_{j1}/kT} [1 + (g_{j2}/g_{j1}) e^{−(E_{j2}−E_{j1})/kT}]    (19-9)

for kT small compared to E_{j2} − E_{j1}. The factor in brackets becomes practically independent of T when kT is small enough.

The Helmholtz function for the system is a sum of terms, one for each subsystem,

F = −kT ln Z = Σ_{j=1}^{N} F_j;    F_j = −kT ln z_j    (19-10)




and the entropy, pressure, and the other thermodynamic potentials are then also sums of terms, one for each subsystem. Whenever any one of the subsystems has energy levels separated farther apart than kT, the corresponding terms in F, S, and U have the limiting forms, obtained from Eq. (19-9),

F_j → −kT ln g_{j1} + E_{j1} − (g_{j2}/g_{j1}) kT e^{−(E_{j2}−E_{j1})/kT}

S_j → k ln g_{j1} + (g_{j2}/g_{j1}) [k + (E_{j2} − E_{j1})/T] e^{−(E_{j2}−E_{j1})/kT}    (19-11)

U_j = F_j + S_jT → E_{j1} + (g_{j2}/g_{j1})(E_{j2} − E_{j1}) e^{−(E_{j2}−E_{j1})/kT}



Thus whenever the j-th subsystem has a single lowest state (g_{j1} = 1, i.e., when the subsystem is a simple one) its entropy goes to zero when T is reduced so that kT is much smaller than the energy separation between the two lowest quantum states of the subsystem. On the other hand, if the lowest state is multiple, S goes to k ln g_{j1} as T → 0. In either case, however, the heat capacity C_{jV} = (∂U_j/∂T)_V of the subsystem vanishes at T = 0. Since all the subsystems have nonzero separations between their energy levels, these results apply to all the subsystems, and thus to the whole system, when T is made small enough. We have thus "explained" the shape of the curve of Fig. 3-1 and the statements made at the beginning of Chapter 9 and in the discussion following Eq. (18-10).



The High-Temperature Limit

When T is large enough so that many allowed levels of a subsystem are contained in a range of energy equal to kT, the exponentials in the partition-function sum of Eq. (19-8) vary slowly enough with ν so that the sum can be changed to a classical integral over phase space, of the form given in Eq. (19-6). In this case, of course, the dependence of Z on T and V is determined by the dependence of the Hamiltonian H on p and q. For example, if the subsystem is a particle in a perfect gas occupying a volume V, H_j = (p_{jx}² + p_{jy}² + p_{jz}²)/2m depends only on the momentum, and the factor in the partition function for the j-th particle is

z_j = (1/h³) ∫∫∫ dV_q ∫∫∫ exp[−(p_x² + p_y² + p_z²)/2mkT] dV_p
    = (V/h³)(2πmkT)^{3/2}    (19-12)

and if there are N particles,

F = −NkT ln V − (3/2)NkT ln(2πmkT/h²);    U = (3/2)NkT
[but see Eq. (21-13)]




On the other hand, if the "subsystem" is one of the normal modes of vibration of a crystal, H_j = (p_j²/2m) + (mω_j²q_j²/2) depends on both q and p, so that

z_j = (1/h) ∫ e^{−mω_j²q_j²/2kT} dq_j ∫ e^{−p_j²/2mkT} dp_j
    = 2πkT/hω_j = kT/ħω_j    (19-13)

and, if there are 3N modes,

F = kT Σ_{j=1}^{3N} ln(ħω_j) − 3NkT ln(kT);    U = 3NkT

the difference between U = (3/2)NkT and U = 3NkT being caused by the presence of the q's in the expression for H in the latter case.

For intermediate temperatures we may have to use equations like (19-11) for those subsystems with widely spaced levels and classical equations like (19-12) or (19-13) for those with closely packed energy levels. The mean energy of the former subsystems is practically independent of T, whereas the mean energy of the latter depends linearly on T; thus only the latter contribute appreciably to the heat capacity of the whole. In a gas of diatomic molecules, for example, the energy levels of translational motion of the molecules are very closely packed, so that for T larger than 1°K the classical integrals are valid for the translational motions, but the rotational, vibrational, and electronic motions only contribute to C_v at higher temperatures.



20

Statistical Mechanics of a Crystal



Two examples of the use of the canonical ensemble will be dis- 
cussed here; the thermal properties of a crystal lattice and those 
of a diatomic gas. Both of these systems have been discussed before, 
but we now have developed the techniques to enable us to work out 
their properties in detail and to answer the various questions and 
paradoxes that have been raised earlier. 

Normal Modes of Crystal Vibration 

For example, the simplified crystal model of Eq. (13-15) assumed 
that each atom in a crystal vibrated independently of any other and 
thus that every atom had the same frequency of vibration. This is 
obviously a poor approximation for a real crystal, and in this chap- 
ter we shall investigate a model in which the effects of one atom's 
motion on its near neighbors are included, at least approximately. 
We shall find that this slightly improved model, although still quite 
simplified, corresponds surprisingly well to the measured thermal 
behavior of most crystals. 

For comparison, however, we shall complete our discussion of the crystal model of Eq. (13-15), with no interaction between atoms and with all atomic frequencies equal. The allowed energy for the j-th degree of freedom is ħω(ν_j + 1/2) (where ν_j is an integer) and the allowed energy of the whole system is

E_{ν_1,...,ν_φ} = ħω Σ_{j=1}^{φ} ν_j + E_0;    E_0 = (1/2)φħω + [(V − V_0)²/2κV_0]



and therefore the partition function, Helmholtz function, and other 
quantities are 









Z = e^{−E_0/kT} z_1z_2···z_φ;    z_j = Σ_{ν_j=0}^{∞} e^{−ħων_j/kT} = (1 − e^{−ħω/kT})^{−1}

F = −kT ln Z = E_0 + 3NkT ln(1 − e^{−ħω/kT})

S = −3Nk ln(1 − e^{−ħω/kT}) + (3Nħω/T)/(e^{ħω/kT} − 1)    (20-1)

U = E_0 + 3Nħω/(e^{ħω/kT} − 1) ≈ E_0 + 3Nħω e^{−ħω/kT}    (kT ≪ ħω)
                               ≈ E_0 + 3NkT              (kT ≫ ħω)

C_v = [3N(ħω)²/kT²] e^{ħω/kT}/(e^{ħω/kT} − 1)² ≈ (3Nħ²ω²/kT²) e^{−ħω/kT}    (kT ≪ ħω)
                                               ≈ 3Nk                       (kT ≫ ħω)

where we have used the formula

Σ_{n=0}^{∞} x^n = 1/(1 − x),    |x| < 1    (20-2)

to reduce the partition sums to closed formulas. These equations were first obtained by Einstein. The heat capacity C_v is plotted in Fig. 20-1 as the dashed curve. It does go to zero as T becomes much smaller than ħω/k, as we showed in Chapter 19 that all quantized systems must. Actually it goes to zero more decidedly than the experimental results show actual crystals do. We shall soon see that this discrepancy is caused by our model's neglect of atomic interactions.
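
The dashed Einstein curve of Fig. 20-1 is easy to regenerate from Eq. (20-1); the sketch below (my own, evaluating C_v in units of 3Nk as a function of t = kT/ħω) exhibits both limiting behaviors:

    import numpy as np

    # Einstein heat capacity, Eq. (20-1), in units of 3Nk; the
    # argument is t = kT / (hbar * omega).
    def einstein_cv(t):
        x = 1.0 / t
        return x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

    for t in [0.05, 0.1, 0.5, 1.0, 5.0]:
        print(t, einstein_cv(t))   # -> 0 exponentially at low T, -> 1 at high T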

In an actual crystal the interaction forces between the atoms, which tend to bring each atom back to its equilibrium position, depend in a complicated way on the displacements of whole groups of atoms. If the displacements are small, the forces depend linearly on the relative displacements and thus the potential energy is a combination of quadratic terms like (1/2)K_i q_i², depending on the displacement from equilibrium of one of the atoms [which were included in the simplified model of Eq. (20-1)], but also terms like (1/2)K_{ij}(q_i − q_j)², corresponding to a force of interaction between one atom and another. Although many of the K_{ij}'s are small or zero, some are not. The total potential energy is thus


[Figure 20-1 appears here: a log-log plot of heat capacity against reduced temperature, comparing the Debye and Einstein curves with experimental points.]

Fig. 20-1. Specific-heat curves for a crystal. Ordinate for the Debye curve is T/Θ = kT/ħω_m; ordinate for the Einstein curve is 3kT/4ħω. Circles are experimental points for graphite, triangles for KCl.



½ Σ_{i=1}^{3N} [ K_i q_i² + Σ_{j>i} K_{ij}(q_i − q_j)² ] = ½ Σ_{i,j=1}^{3N} A_{ij} q_i q_j

A_{ii} = K_i + Σ_{j≠i} K_{ij};    A_{ij} = A_{ji} = −K_{ij}  (i ≠ j)

Therefore the Hamiltonian for the crystal is


H = ½ Σ_{j=1}^{3N} (p_j²/m) + ½ Σ_{i,j=1}^{3N} A_{ij} q_i q_j    (20-3)

Actually there are six coordinates not represented in the sum over 
the q's, those for the motion of the crystal as a rigid body; so the 
total number of coordinates in the second sum is 3N-6 rather than 
3N. However, 6 is so much smaller than 3N that we can ignore this 
discrepancy between the sums, by leaving out the kinetic energy of 
rigid motion and calling 3N-6 the same as 3N. 

The solution of a dynamical problem of this sort is discussed in all texts on dynamics. The matrix of coefficients A_{ij} determines a set of normal coordinates Q_n, with conjugate momenta P_n, in terms of which the Hamiltonian becomes a sum of separated terms, each of which is dependent on just one coordinate pair,

H = ½ Σ_{n=1}^{3N} [ (1/m_n)P_n² + m_nω_n²Q_n² ] + [(V − V_0)²/2κV_0]    (20-4)

Application of Hamilton's equations (16-1), (∂H/∂P_n) = Q̇_n and (∂H/∂Q_n) = −Ṗ_n, results in a set of equations

P_n = m_nQ̇_n;    Q̈_n + ω_n²Q_n = 0    (20-5)

which may be solved to obtain the classical solution Q_n = Q_{0n}e^{iω_nt}. Thus ω_n/2π is the frequency of oscillation of the n-th normal mode of oscillation of the crystal.

These normal modes of the crystal are its various standing waves 
of free vibration. The lowest frequencies are in the sonic range, cor- 
responding to wavelengths a half or a third or a tenth of the dimensions 
of the crystal. The highest frequencies are in the infrared and corre- 
spond to wavelengths of the size of the interatomic distances. Be- 
cause there are 3N degrees of freedom there are 3N different stand- 
ing waves (or rather 3N-6 of them, to be pedantically accurate); some 
of them are compressional waves and some are shear waves. 

Quantum States for the Normal Modes 

According to Eq. (16-7), the allowed energies of a single normal mode, with Hamiltonian (1/2m_j)P_j² + (1/2)m_jω_j²Q_j², are given by the formula ħω_j[ν_j + (1/2)], where ν_j is an integer, the quantum number of the j-th normal mode. Sometimes the quantized standing waves are called phonons; ν_j is the number of phonons in the j-th wave. Microstate ν of the crystal corresponds to a particular choice of value for each of the ν_j's. The energy of the phonons in microstate ν is then

E_ν = E_0(V) + ħ Σ_{j=1}^{3N} ω_jν_j;    E_0 = [(V − V_0)²/2κV_0] + (ħ/2) Σ_{j=1}^{3N} ω_j
    (20-6)

each term in the sum being the energy of a different standing wave. The difference between this and the less accurate Einstein formulas of Eqs. (20-1) is that in the previous case the ω's were the same for all the oscillators, whereas inclusion of atomic interaction in the present model has spread out the resonant frequencies, so that each standing wave has a different value of ω.

According to Eq. (19-4) the partition function is

Z = Σ_{all v_j's} exp(−E_ν/kT) = e^{−E_0/kT} z_1 z_2 ··· z_{3N}

where

z_j = Σ_{v_j} e^{−ħω_j v_j/kT} = (1 − e^{−ħω_j/kT})^{−1}    (20-7)

and thus, from Eq. (19-5), the Helmholtz function for the crystal is

F = −kT ln Z = E_0(V) + kT Σ_{j=1}^{3N} ln(1 − e^{−ħω_j/kT})    (20-8)

We can then compute the probability f_ν that the system is in the
microstate specified by the quantum numbers ν = v_1, v_2, ..., v_{3N}. It
is the product [see Eq. (19-4)]

f_ν = (1/Z) e^{−E_ν/kT} = f_1 f_2 f_3 ··· f_{3N}

where

f_j = (1/z_j) e^{−ħω_j v_j/kT} = e^{−ħω_j v_j/kT} − e^{−ħω_j(v_j+1)/kT}    (20-9)

is the probability that the j-th standing wave of thermal vibration is 







in the v_j-th quantum state. The probability that the crystal is in the
microstate ν is of course the product of the probabilities that the
various normal modes are in their corresponding states.

When kT is small compared to ħω_j for all the standing waves of
crystal vibration, all the z_j's are practically unity, F is approxi-
mately equal to E_0(V), independent of T, and the entropy is very
small. When kT is large compared to any ħω_j, each of the terms in
parentheses in Eq. (20-8) will be approximately equal to ħω_j/kT and
consequently the Helmholtz function will contain a term −3NkT ln(kT),
the temperature-dependent term in the entropy will be 3Nk ln kT, and
the heat capacity will be 3Nk = 3nR, as expected. To find values for
the intermediate temperatures we must carry out the summation over
j in Eq. (20-8) or, what is satisfactory here, we must approximate
the summation by an integral and then carry out the integration.

Summing over the Normal Modes 

The crucial question in changing from sum to integral is: How
many standing waves are there with frequencies (times 2π) between
ω and ω + dω? There are three kinds of waves in a crystal, a set of
compressional waves and two sets of mutually perpendicular shear
waves. If the crystal is a rectangular parallelepiped of dimensions
l_x, l_y, l_z, the pressure distribution of one of the compressional waves
would be

p = a Q_j sin(πk_j x/l_x) sin(πm_j y/l_y) sin(πn_j z/l_z)

where Q_j(t) is the amplitude of the normal mode j, with equations of
motion (20-5), a is the proportionality constant relating Q_j and the
pressure amplitude of the compressional wave, and k_j, m_j, n_j are in-
tegers giving the number of standing-wave nodes along the x, y, and z
axes, respectively, for the j-th wave.

The value of ω_j, 2π times the frequency of the j-th mode, is given
by the familiar formula

ω_j² = (πck_j/l_x)² + (πcm_j/l_y)² + (πcn_j/l_z)²    (20-10)

where c is the velocity of the compressional wave. Each different j
corresponds to a different trio of integers k_j, m_j, n_j. A similar dis-
cussion will arrive at a similar formula for each of the shear-wave
sets, except that the value of c is that appropriate for shear waves.
The problem is to determine how many allowed ω_j's have values be-
tween ω and ω + dω.

To visualize the problem, imagine the allowed ω_j's to be plotted
as points in "ω space," as shown in Fig. 20-2. They form a lattice
of points in the first octant of the space, with a spacing in the ω_x
direction of πc/l_x, a spacing in the ω_y direction of πc/l_y, and a
spacing in the ω_z direction of πc/l_z, with the allowed value of ω
given by the distance from the origin to the point in question, as
shown by the form of Eq. (20-10). The point closest to the origin can
be labeled j = 1, the next j = 2, etc. The spacing between the allowed
points is therefore such that there are, on the average, l_x l_y l_z/π³c³ =
V/π³c³ points in a unit volume of ω space, where V = l_x l_y l_z is the
volume occupied by the crystal.

Fig. 20-2. Representation of allowed values of ω in ω space.

Therefore all the allowed ω_j's having value less than ω are rep-
resented by those points inside a sphere of radius ω (with center at
the origin). The volume of the part of the sphere in the first octant is
(1/8)(4πω³/3) and, because there are V/π³c³ allowed points per unit
volume, there must be (V/π³c³)(πω³/6) standing waves with values of
ω_j less than ω. Differentiating this with respect to ω, we see that the
average number of ω_j's with value between ω and ω + dω is

dj = (V/2π²c³) ω² dω    (20-11)



Several comments must be made about this formula. In the first
place, the formula is for just one of the three sets of standing waves,
and thus the dj for all the normal modes is the sum of three such
formulas, each with its appropriate value of c, the wave velocity.
But we can combine the three by using an average value of c, and say
that, approximately, the total number of standing waves with ω_j's
between ω and ω + dω is






dj = (3V/2π²c³) ω² dω    (20-12)

where c is an appropriate average of the wave velocities of the com-
pressional and shear waves. Next we should note that Eq. (20-11)
was derived for a crystal of rectangular shape. However, a more-
detailed analysis of standing waves in crystals of more-general
shapes shows that these equations still hold for the other shapes as
long as V is the crystal volume. For a differently shaped crystal,
the lattice of allowed points in ω space is not that shown in Fig. 20-2,
but in spite of this the density of allowed points in ω space is the
same, V/π³c³.
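This point-density argument is easy to check by brute-force counting. The following minimal Python sketch (not part of the text; the cubic box, wave speed, and frequency cutoff are all arbitrary illustrative choices) counts the lattice points of Fig. 20-2 inside the octant sphere and compares with (V/π³c³)(πω³/6):

    import math

    c, l = 5000.0, 1e-2                 # assumed wave speed (m/s) and cubic-box edge (m)
    spacing = math.pi * c / l           # lattice spacing pi*c/l in each direction
    w = 100 * spacing                   # count modes out to 100 lattice spacings

    count = sum(1 for k in range(1, 101) for m in range(1, 101)
                  for n in range(1, 101)
                if (k*k + m*m + n*n) * spacing**2 < w*w)
    predicted = (l**3 / (math.pi * c)**3) * (math.pi * w**3 / 6)
    print(count, predicted, count / predicted)   # ratio is close to 1

The small deficit in the ratio comes from lattice points near the spherical surface, and shrinks as the cutoff grows.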

Next we remind ourselves that there is an upper limit to the al-
lowed values of the ω_j's; in fact there can only be 3N different nor-
mal modes in a crystal with N atoms (3N−6, to be pedantically
exact). Therefore our integrations should go to an upper limit ω_m,
where

3N = Σ_{j=1}^{3N} 1 = ∫_0^{ω_m} dj = (3V/2π²c³) ∫_0^{ω_m} ω² dω = (V ω_m³/2π²c³)

or

ω_m = (6π²Nc³/V)^{1/3}    (20-13)

Finally we note that both Eqs. (20-12) and (20-13) are approxima-
tions of the true state of things, first because we have tacitly as-
sumed that c is independent of ω, which is not exactly true at the
higher frequencies, and second because we have assumed that the
highest compressional frequency is the same as the highest shear
frequency, namely, ω_m/2π, and this is not correct either. All we can
do is to hope our approximations tend to average out and that our
final result will correspond reasonably well to the measured facts.
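As a rough numerical sketch of Eq. (20-13) (the average wave velocity and the atomic density below are assumed values typical of a simple solid, not numbers from the text), one can compute ω_m and the corresponding temperature ħω_m/k, the Debye temperature defined below; the result lands in the few-hundred-kelvin range of Table 20-2:

    import math

    hbar, k = 1.055e-34, 1.381e-23      # J s, J/K
    c = 4000.0                           # assumed average wave velocity, m/s
    n_density = 6.0e28                   # assumed N/V, atoms per cubic meter

    w_m = (6 * math.pi**2 * n_density * c**3) ** (1.0/3.0)   # Eq. (20-13)
    theta = hbar * w_m / k
    print(w_m, theta)   # roughly 6e13 rad/s and a few hundred kelvin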

The Debye Formulas

Returning to Eq. (20-8), we change from a sum over j to an inte-
gral over dj, using Eq. (20-12) and integrating by parts; we obtain

F = [(V − V_0)²/2κV_0] + ∫_0^{ω_m} [(1/2)ħω + kT ln(1 − e^{−ħω/kT})] dj

  = E_0(V) + (3kTV/2π²c³) ∫_0^{ω_m} ω² ln(1 − e^{−ħω/kT}) dω

  = E_0(V) + 3NkT ln(1 − e^{−ħω_m/kT}) − NkT D(ħω_m/kT)    (20-14)

where

E_0(V) = [(V − V_0)²/2κV_0] + (ħ/2) ∫_0^{ω_m} ω dj = [(V − V_0)²/2κV_0] + (9/8)Nħω_m


The function D, defined by the integral

D(x) = (3/x³) ∫_0^x [z³ dz/(e^z − 1)] ≈ π⁴/5x³    (x >> 1)
                                      ≈ 1 − (3/8)x    (x << 1)
(20-15)

is called the Debye function, after the originator of the formula.

We now can express the temperature scale in terms of the Debye
temperature θ = ħω_m/k (which is a function of V) and then write
down the thermodynamic functions of interest,

F = [(V − V_0)²/2κV_0] + (9/8)Nkθ + NkT[3 ln(1 − e^{−θ/T}) − D(θ/T)]

  ≈ [(V − V_0)²/2κV_0] + (9/8)Nkθ − (π⁴NkT⁴/5θ³)    T << θ

  ≈ [(V − V_0)²/2κV_0] + (9/8)Nkθ + 3NkT ln(θ/T) − NkT    T >> θ

S = Nk[−3 ln(1 − e^{−θ/T}) + 4D(θ/T)]

  ≈ (4π⁴NkT³/5θ³)    T << θ

  ≈ 3Nk ln(Te^{4/3}/θ)    T >> θ

U = [(V − V_0)²/2κV_0] + (9/8)Nkθ + U_v(T);    U_v = 3NkT D(θ/T)

(20-16)

C_v = 3Nk[4D(θ/T) − (3θ/T)/(e^{θ/T} − 1)]

  ≈ (12π⁴/5)Nk(T/θ)³    T << θ

  ≈ 3Nk    T >> θ






P = [(V_0 − V)/κV_0] − (9/8)Nkθ' − 3NkT(θ'/θ) D(θ/T)

  ≈ [(V_0 − V)/κV_0] − (9/8)Nkθ' − (3π⁴/5)Nkθ'(T/θ)⁴    T << θ

  ≈ [(V_0 − V)/κV_0] − 3NkT(θ'/θ)    T >> θ

where θ' = dθ/dV = (ħ/k)(dω_m/dV) is a negative quantity. Referring
to Eq. (3-6) we see that the empirical equation of state is approxi-
mately the same as the last line of Eqs. (20-16) if −(3Nkθ'/θ) is equal
to β/κ of the empirical formula. This relationship can be used to
predict values of β if θ' can be computed, or it can be used to deter-
mine θ' from measurements of β and κ.

The functions D(x) = [x U_v(θ/x)/3Nkθ] and C_v/3Nk are given in
Table 20-1 as functions of x = θ/T.







Table 20-1

  x      D(x)     C_v/3Nk        x      D(x)     C_v/3Nk
 0.0    1.0000    1.0000        4.0    0.1817    0.5031
 0.1    0.9627    0.9995        5.0    0.1177    0.3689
 0.2    0.9270    0.9980        6.0    0.0776    0.2657
 0.5    0.8250    0.9882        8.0    0.0369    0.1382
 1.0    0.6745    0.9518       10      0.0193    0.0759
 1.5    0.5473    0.8960       12      0.0113    0.0448
 2.0    0.4411    0.8259       15      0.0056    0.0230
 2.5    0.3540    0.7466       20      0.0024    0.0098
 3.0    0.2833    0.6630       25      0.0012    0.0050


A curve of C_v/3Nk versus T/θ is given in Fig. 20-1 (solid curve).
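The entries of Table 20-1 can be reproduced by numerical quadrature. The minimal Python sketch below (not from the text; the sample points and step count are arbitrary) evaluates D(x) of Eq. (20-15) by the trapezoidal rule and forms C_v/3Nk = 4D(x) − (3x)/(e^x − 1) from Eqs. (20-16):

    import math

    def debye_D(x, n=2000):
        """D(x) = (3/x^3) * integral from 0 to x of z^3/(e^z - 1) dz."""
        if x == 0.0:
            return 1.0
        f = lambda z: z**3 / math.expm1(z) if z > 0 else 0.0  # integrand -> z^2 near 0
        h = x / n
        s = 0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, n))
        return 3.0 * h * s / x**3

    def cv_over_3Nk(x):
        return 1.0 if x == 0.0 else 4.0 * debye_D(x) - 3.0 * x / math.expm1(x)

    for x in (0.5, 1.0, 2.0, 5.0, 10.0):
        print(x, round(debye_D(x), 4), round(cv_over_3Nk(x), 4))

At x = 1, for instance, this prints D = 0.6745 and C_v/3Nk = 0.9518, matching the table.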



Comparison with Experiment

Several checks with experiment are possible. By adjusting the value
of θ we can fit the curve for C_v, predicted by Eq. (20-16) and drawn
in Fig. 20-1, to the experimental curve. That the fit is excellent can
be seen from the check between the circles and triangles and the solid
line. We see, for example, that the Debye formula, which takes into
account (approximately) the coupling between atomic vibrations, fits
better than the Einstein formula, which neglects interaction, the dis-
crepancy being greatest at low temperatures.

From the fit one of course obtains an empirical value of θ = ħω_m/k
for each crystal measured, and thus a value of ω_m for each crystal.
However, by actually measuring the standing-wave frequencies of the
However, by actually measuring the standing-wave frequencies of the 






crystal and by summing as per Eq. (20-13), we can find out what ω_m
(and thus θ) ought to be, and then check it against the θ that gives the
best fit for C_v. These checks are also quite good, as can be seen
from Table 20-2.

Table 20-2

Substance    θ, °K, from C_v fitting    θ, °K, from elastic data
NaCl                 308                        320
KCl                  230                        246
Ag                   237                        216
Zn                   308                        305


Thus formulas (20-16) represent a very good check with experi-
ment for many crystals. A few differences do occur, however, some
of which can be explained by using a somewhat more complicated
model. In a few cases, lithium for example, the normal modes are so
distributed that the approximation of Eq. (20-12) for the number of
normal modes with ω_j's between ω and ω + dω is not good enough,
and a better approximation must be used [which modifies Eqs. (20-13)
and (20-14)]. In the case of most metals the C_v does not fit the Debye
curve at very low temperatures (below about 2°K); in this region the
C_v for metals turns out to be more nearly linearly dependent on T
than proportional to T³, as the Debye formula predicts. The discrep-
ancy is caused by the free electrons present in metals, as will be
shown later.




Statistical Mechanics of a Gas



We turn now to the low-density gas phase. A gas, filling volume V, is 
composed of N similar molecules which are far enough apart so the 
forces between molecules are small compared to the forces within a 
molecule. At first we assume that the intermolecular forces are neg- 
ligible. This does not mean that the forces are completely nonexistent; 
there must be occasional collisions between molecules so that the gas 
can come to equilibrium. We do assume, however, that the collisions 
are rare enough so that the mean potential energy of interaction be- 
tween molecules is negligible compared to the mean kinetic energy of 
the molecules. 

Factoring the Partition Function

The total energy of the system will therefore be just the sum of the
separate energies ε(ν_mole) of the individual molecules, each depend-
ing only on its own quantum numbers (which we can symbolize
by ν_mole), and the partition function can be split into N molecular
factors, as explained in Eq. (19-7):

Z = (z_mole)^N ;    z_mole = Σ_{ν_mole} exp[−ε(ν_mole)/kT]

[but see Eq. (21-12)].

In this case the partition function can be still further factored, for
the energy of each molecule can be split into an energy of translation
H_tr of the molecule as a whole, an energy of rotation H_rot as a rigid
body, an energy of vibration H_vib of the constituent atoms with respect
to the molecular center of mass, and finally an energy of electronic
motion H_el:

H_mole = H_tr + H_rot + H_vib + H_el    (21-1)





To the first approximation these energy terms are independent; the
coordinates that describe H_tr, for example, do not enter the functions
H_rot, H_vib, or H_el unless we include the effect of collisions, and we
have assumed this effect to be negligible. This independence is not
strictly true for the effects of rotation, of course; the rotation does
affect the molecular vibration and its electronic states to some extent.
But the effects are usually small and can be neglected to begin with.

Consequently, each molecular partition function can be, approxi-
mately, split into four separate factors:

z_mole = z_tr · z_rot · z_vib · z_el

and the partition function for the system can be divided correspond-
ingly,

Z = Z_tr · Z_rot · Z_vib · Z_el

where

Z_tr = (z_tr)^N ,    Z_rot = (z_rot)^N ,  etc.    (21-2)

The individual molecular factors are sums of exponential terms, each
corresponding to a possible state of the individual molecule, with
quantized energies,

z_tr = Σ_{k,m,n} exp(−ε_{kmn}/kT) ;    z_rot = Σ_{λ,ν} g_{λν} exp(−ε_{λν}/kT)

(21-3)

and so on, where k, m, n are the quantum numbers for the state of
translational motion of the molecule, λ, ν those for rotation, etc., and
where g_{λν} are the multiplicities of the rotational states (the g's for
the translational states are all 1, so they are not written).

The energy separation between successive translational states is
very much smaller than the separation between successive rotational
states, and these are usually much smaller than the separations be-
tween successive vibrational states of the molecule; the separations
between electronic states are still another order of magnitude larger.
To standardize the formulas, we shall choose the energy origin so the
ε for the first state is zero; thus the first term in each z sum is
unity.

The Translational Factor

Therefore there is a range of temperature within which several
terms in the sum for z_tr are nonnegligible, but only the first term is
nonnegligible for z_rot, z_vib, and z_el. In this range of temperature






the total partition function for the gas system has the simple form

Z ≈ (z_tr)^N ;    z_tr = Σ_{k,m,n} exp(−ε_{kmn}/kT)    (21-4)

all the other factors being practically equal to unity. To compute Z
for this range of temperature we first compute the energies ε_{kmn} and
then carry out the summation. From it we can calculate F, S, etc.,
for a gas of low density at low temperatures.

The Schrödinger equation (16-6) for the translational motion of a
molecule of mass M is

(ħ²/2M)[(∂²ψ/∂x²) + (∂²ψ/∂y²) + (∂²ψ/∂z²)] = −ε_tr ψ

If the gas is in a rectangular box of dimensions l_x, l_y, l_z and volume
V = l_x l_y l_z, with perfectly reflecting walls, the allowed wave functions
and energies turn out to be

ψ_{kmn} = A sin(πkx/l_x) · sin(πmy/l_y) · sin(πnz/l_z)

ε_{kmn} = (π²ħ²/2M)[(k/l_x)² + (m/l_y)² + (n/l_z)²] = p²/2M    (21-5)

where p is the momentum of the molecule in state k, m, n. For a mol-
ecule of molecular weight 30 and for a box 1 cm on a side, π²ħ²/2Ml²
≈ 10⁻³⁸ joule. Since k ≈ 10⁻²³ joule/°K, the spacing of the transla-
tional levels is very much smaller than kT even when T = 1°K, and
we can safely change the sum for z_tr into an integral over dk, dm,
and dn,

z_tr = ∫_0^∞ ∫_0^∞ ∫_0^∞ exp{−(π²ħ²/2MkT)[(k/l_x)² + (m/l_y)² + (n/l_z)²]} dk dm dn

     = (V/h³)(2πMkT)^{3/2}    [but see Eq. (21-13)]    (21-6)

by using Eqs. (12-6) (note that we have changed from ħ back to
h = 2πħ).
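The order-of-magnitude claim above is easy to verify; a two-line Python sketch (constants rounded, not from the text):

    import math

    hbar, k, amu = 1.055e-34, 1.381e-23, 1.66e-27
    M, l = 30 * amu, 1e-2                    # molecular weight 30, 1-cm box

    spacing = math.pi**2 * hbar**2 / (2 * M * l**2)
    print(spacing, spacing / k)    # ~1e-38 J, i.e., ~1e-15 K, tiny next to T = 1 K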

This result has been obtained by summing over the quantized states.
But with the levels so closely spaced we should not have difficulty in
obtaining the same result by integrating over phase space. The trans-
lational Hamiltonian is p²/2M and the integral is

z_tr = ∫···∫ exp[−(1/2MkT)(p_x² + p_y² + p_z²)] h⁻³ dx dy dz dp_x dp_y dp_z

     = (V/h³)(2πMkT)^{3/2}    [but see Eq. (21-13)]    (21-7)




as before. Integration in (21-7) goes just the same as in (21-6), ex-
cept that we integrate over p_x, p_y, p_z from −∞ to +∞, whereas we in-
tegrated over k, m, n from 0 to ∞; the result is the same.

The probability f_{kmn} that a molecule has translational quantum
numbers k, m, n is thus (1/z_tr) exp(−ε_{kmn}/kT) and the probability
density that a molecule has translational momentum p and is located
at r in V is

f(q,p) = (1/V)(2πMkT)^{−3/2} exp(−p²/2MkT)

which is the Maxwell distribution again. Also, in the range of temper-
ature where only Z_tr changes appreciably with temperature, the
Helmholtz function and the entropy of the gas are

F = −kT ln(Z_tr) = −NkT[ln(V) + (3/2) ln(2πMkT/h²)]

S = Nk[ln(V) + (3/2) ln(2πMkT/h²)] + (3/2)Nk    (21-8)

[but see Eq. (21-14)]

There is a major defect in this pair of formulas. Neither F nor S
satisfies the requirement that it be an extensive variable, as illus-
trated in regard to U in the discussion preceding Eq. (6-3) (see also
the last paragraph in Chapter 8). Keeping intensive variables constant,
increasing the amount of material in the system by a factor A should
increase all extensive variables by the same factor A. If we increase
N to AN in formulas (21-8), the temperature term will increase by
the factor A but the volume term will become ANk ln(AV), which is
not A times Nk ln(V). The corresponding terms in Eqs. (8-21), giving
the thermodynamic properties of a perfect gas, are Nk ln(V/V_0), and
when N changes to AN, V goes to AV and also V_0 goes to AV_0, so that
the term becomes ANk ln(AV/AV_0), which is just A times Nk ln(V/V_0).
Evidently the term Nk ln(V) in (21-8) should be Nk ln(V/N), or some-
thing like it, and thus the partition function of (21-7) should have had
an extra factor N^{-1}, or the partition function for the gas should have
had an extra factor N^{-N} (or something like it). The trouble with the
canonical ensemble for a gas seems to be in the way we set up the
partition function.

If we remember Stirling's formula (18-5) we might guess that 
somehow we should have divided the Z of Eq. (21-1) by N! to obtain 
the correct partition function for the gas. The resolution of this di- 
lemma, which is another aspect of Gibbs' paradox, mentioned at the 
end of Chapter 6, lies in the degree of distinguishability of individual 
molecules. 




The Indistinguishability of Molecules 

Before the advent of quantum mechanics we somehow imagined that,
in principle, we could distinguish one molecule from another: that we
could paint one blue, for example, so we could always tell which one
was the blue one. This is reflected in our counting of translational
states of the gas, for we talked as though we could distinguish the
state where molecule 1 has energy ε_1 and molecule 2 has energy
ε_2 from the state where molecule 1 has energy ε_2 and molecule 2 has
energy ε_1, for example. But quantum mechanics has taught us that we
cannot so distinguish between molecules; a state where molecule 1 has
quantum numbers k_1, m_1, n_1, molecule 2 has k_2, m_2, n_2, and so on, not
only has the same energy as the one where we have reshuffled the
quantum numbers among the molecules, it is really the same state,
and should only be counted once, not N! times. We have learned that
physical reality is represented by the wave function, and that the
square of a wave function gives us the probability of presence of a
molecule but does not specify which molecule is present. Different
states correspond to different wave functions, not to different permu-
tations of molecules.

At first sight the answer to this whole set of problems would seem 
to be to divide Z by N!. If particles are distinguishable, there are N! 
different ways in which we can assign N molecules to N different 
quantum states. If the molecules are indistinguishable there is only 
one state instead of N! ones. This is a good-enough answer for our 
present purposes. But the correct answer is not so simple as this, as 
we shall indicate briefly here and investigate in detail later. The dif- 
ficulty is that, for many states of some systems, the N particles are 
not distributed among N different quantum states; sometimes several 
molecules occupy the same state. 

To illustrate the problem, let us consider a system with five par-
ticles, each of which can be in quantum state 0 with zero energy or
else in quantum state 1 with energy ε. The possible energy levels E_ν
of the system of five particles are, therefore,

E_0 = 0     all five particles in lower state
E_1 = ε     one particle in upper state, four in lower
E_2 = 2ε    two particles in upper state, three in lower
E_3 = 3ε    three particles in upper state, two in lower
E_4 = 4ε    four particles in upper state, one in lower
E_5 = 5ε    all five particles in upper state

(Note that we must distinguish between system states, with energies
E_ν, and particle states, with energies 0 and ε.) There is only one
system state with energy E_0, no matter how we count states. All par-
ticles are in the lower particle state and there is no question of which
particle is in which state. In this respect, a particle state is like the






mathematician's urn, from which he draws balls; ordering of parti- 
cles inside a single urn has no meaning; they are either in the urn or 
not. 

Distinguishability does come into the counting of the system states
having energy E_1, however. If we can distinguish between particles
we shall have to say that five different system states have energy E_1:
one with particle 1 in the upper state and the others all in the lower
"urn," another with particle 2 excited and 1, 3, 4, and 5 in the ground
state, and so on. In other words the multiplicity g_1 of Eq. (19-8) is 5
for the system state ν = 1. On the other hand, if we cannot distinguish
between particles, there is only one state with energy E_1, the one with
one particle excited and four in the lower state (and it has no meaning
to ask which particle is excited; they all are at one time or other, but
only one is excited at a time).

For distinguishable particles, a count of the different ways we can
put five particles into two urns, two in one urn and three in the other,
will show that the appropriate multiplicity for energy E_2 is g_2 = 10.
And so on; g_3 = 10, g_4 = 5, g_5 = 1. Therefore, for distinguishable par-
ticles, the partition function for this simple system would be



Z = Σ_{ν=0}^{5} g_ν e^{−E_ν/kT} = 1 + 5x + 10x² + 10x³ + 5x⁴ + x⁵ = (1 + x)⁵

where x = e^{−ε/kT}, and where we have used the binomial theorem to
take the last step. Thus such a partition function factors into single-
particle factors z = 1 + e^{−ε/kT}, as was assumed in Eqs. (19-8) and (24-3).

On the other hand, if the particles are indistinguishable, all the
multiplicities g_ν are unity and

Z = 1 + x + x² + x³ + x⁴ + x⁵

which does not factor into five single-particle factors.
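This five-particle bookkeeping can be verified by direct enumeration. A minimal Python sketch (the value of x is an arbitrary test choice, not from the text):

    from itertools import product
    from math import comb

    x = 0.3    # an arbitrary value of x = exp(-eps/kT)

    # Distinguishable: sum over all 2^5 assignments of particles to levels.
    Z_dist = sum(x ** sum(state) for state in product((0, 1), repeat=5))
    # Indistinguishable: only the number in the upper state matters.
    Z_ind = sum(x ** n for n in range(6))

    print(Z_dist, (1 + x) ** 5)            # equal, by the binomial theorem
    print([comb(5, n) for n in range(6)])  # multiplicities 1, 5, 10, 10, 5, 1
    print(Z_ind)                           # smaller, and does not factor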

Counting the System States

Generalizing, we can say that if we have N distinguishable parti-
cles, distributed among M different quantum states, n_j of them in
particle state j, with energy ε_j (so that Σ_{j=1}^{M} n_j = N), then the number of
different ways we can distribute these N particles among the M par-
ticle states is

g_ν = N!/(n_1! n_2! ··· n_M!)    (21-9)





N!, the number of different ways all N particles can be permuted,
being reduced by the numbers n_j! of different ways the particles
could be permuted in each of the M "urns," since permutation inside
an urn does not count. The Z for distinguishable particles then is

Z_dist = Σ_ν g_ν exp(−Σ_j n_j ε_j/kT)

       = Σ_ν [N!/(n_1! n_2! ··· n_M!)] x_1^{n_1} x_2^{n_2} ··· x_M^{n_M} = z^N    (21-10)

where x_j = exp(−ε_j/kT), z = x_1 + x_2 + ··· + x_M, where the sum is over
all values of the n_j's for which Σ_{j=1}^{M} n_j = N, and where we have used
the multinomial theorem to make the last step. Again this partition
function factors into single-particle factors.

Again, if the particles are indistinguishable, the partition function
is

Z_ind = Σ_ν x_1^{n_1} x_2^{n_2} ··· x_M^{n_M}    (21-11)

with the sum again over all values of the n_j's for which Σ_j n_j = N.
This sum does not factor into single-particle factors.

We thus have reached a basic difficulty with the canonical ensem-
ble. As long as we could consider the particles in the gas as distin-
guishable, our partition functions came out in a form that could be fac-
tored into N z's, one for each separate particle. As we have seen,
this makes the calculations relatively simple. If we now have to use
the canonical ensemble for indistinguishable particles, this factorabil-
ity is no longer possible, and the calculations become much more dif-
ficult. In later chapters we shall find that a more general ensemble
enables us to deal with indistinguishable particles nearly as easily as
with distinguishable ones. But in this chapter we are investigating
whether, under some circumstances, the partition function for the
canonical ensemble can be modified so that indistinguishability can
approximately be taken into account, still retaining the factorability
we have found so useful. Can we divide Z_dist of Eq. (21-10) by some
single factor so it is, at least approximately, equal to the Z_ind of
Eq. (21-11)?




There is a large number of terms in the sum of (21-10) which have
multiplicity g_ν = N!. These are the ones for the system states ν for
which all the n_j's are 0 or 1, for which no particle state is occupied
by more than one particle. We shall call these system states the
sparse states, since the possible particle states are sparsely occu-
pied. On the other hand, there are other terms in (21-10) with multi-
plicity less than N!. These are the terms for which one or more of
the n_j's are larger than 1; some particle states are occupied by more
than one of the particles. Such system states can be called dense
states, for some particle states are densely occupied. If the number
and magnitude of the terms for the sparse states in (21-10) are much
larger than the number and magnitude of the terms for the dense
states, then it will not be a bad approximation to say that all the g_ν's
in (21-10) are equal to N! and thus that (Z_dist/N!) does not differ
much from the correct Z_ind. And (Z_dist/N!) can still be factored,
although Z_ind cannot.

To see when this advantageous situation will occur, we should ex-
amine the relative sizes of the terms in the sum of Eq. (21-10). The
term for which the factor x_1^{n_1} ··· x_M^{n_M} is largest is the one for which
n_1 = N, n_j = 0 (j > 1) (i.e., for which all particles are in the lowest
state). This term has the value 1 · exp(−Nε_1/kT). It is one of the
"densest" states. The largest term for a sparse state is the one for
which n_1 = n_2 = ··· = n_N = 1, n_j = 0 (j > N) (i.e., for which one particle
is in the lowest state, one in the next, and so on up to the N-th state).
Its value is

(N!) exp[−(ε_1 + ε_2 + ··· + ε_N)/kT] ≈ √(2πN) exp{N[ln(N/e) − (ε̄_N/kT)]}

where we have used Stirling's formula (18-5) for N! and we have writ-
ten ε̄_N for the average energy [(ε_1 + ε_2 + ··· + ε_N)/N] of the first N par-
ticle states. Consequently, whenever ln(N/e) is considerably larger
than (ε̄_N − ε_1)/kT, the sum of sparse-state terms in Z_dist is so much
larger than the sum of dense-state terms that Z_dist is practically
equal to a sum of the sparse-state terms only, and in this case Z_dist
≈ N! Z_ind. This situation is the case when kT is considerably larger
than the spacing between particle-state energy levels, which is the
case when classical mechanics holds.

The Classical Correction Factor

Therefore whenever the individual particles in the system have en-
ergy levels sufficiently closely packed, compared to kT, so that clas-
sical phase-space integrals can be used for at least part of the z
factor, it will be a good approximation to correct for the lack of dis-
tinguishability of the molecules by dividing Z_dist by N!. In this case
there are enough low-lying levels so that each particle can occupy a
different quantum state, and our initial impulse, to divide Z by N!,
the number of different ways in which we can assign N molecules to
N different states, was a good one. Instead of Eq. (21-1) we can use
the approximate formula

Z ≈ (1/N!)(z_mole)^N ≈ (e z_mole/N)^N    (21-12)

(omitting the factor √(2πN) in the second form).

Since the translational energy levels of a gas are so closely spaced,
this method of correcting for the indistinguishability of the molecules
should be valid for T > 1°K.

The correction factor can be included in the translational factor, so
that, instead of Eqs. (21-4) to (21-8), we should use

Z_tr = (1/N!) V^N (2πMkT/h²)^{(3/2)N} ≈ (eV/N)^N (2πMkT/h²)^{(3/2)N}

     = (eV/n ℓ_T³)^N    (21-13)

where n = N/N_0 is the number of moles and where the "thermal
length" ℓ_T = hN_0^{1/3}/√(2πMkT) is equal to 1.47 × 10⁻¹ meters for pro-
tons at T = 1°K (for other molecules or temperatures divide by the
square root of the molecular weight or of T). The values of the trans-
lational parts of the various thermodynamic quantities for the gas,
corrected for molecular indistinguishability, are then

F_tr = −NkT[ln(eV/N) + (3/2) ln(2πMkT/h²)]

S_tr = Nk[ln(V/N) + (3/2) ln(2πMkT/h²)] + (5/2)Nk

U = (3/2)NkT;    C_v = (3/2)Nk;    P = (NkT/V)

H = (5/2)NkT;    C_p = (5/2)Nk    (21-14)

The equation for S is called the Sackur-Tetrode formula.
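As a numerical illustration of the Sackur-Tetrode formula of Eqs. (21-14), the Python sketch below evaluates S for one mole of helium at room conditions; the gas, temperature, and pressure are arbitrary choices (not from the text), with rounded constants:

    import math

    h, k, N0 = 6.626e-34, 1.381e-23, 6.022e23   # J s, J/K, 1/mol
    M = 4.003e-3 / N0                            # mass of a helium atom, kg
    T, N = 300.0, N0                             # one mole at 300 K
    V = N * k * T / 1.013e5                      # volume at atmospheric pressure

    S = N * k * (math.log(V / N)
                 + 1.5 * math.log(2 * math.pi * M * k * T / h**2)) + 2.5 * N * k
    print(S)    # about 126 J/K for the mole, matching tabulated values for helium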

Comparison with Eqs. (8-21) shows that statistical mechanics has
indeed predicted the thermodynamic properties of a perfect gas. It
has done more, however; it has given the value of the constants of in-
tegration S_0, T_0, and V_0 in terms of the atomic constants h, M, and k,
and it has indicated the conditions under which a collection of N mol-
ecules can behave like a perfect gas of point particles.




We also discover that we can now solve Gibbs' paradox, stated at 
the end of Chapter 6. Mixing two different gases does change the en- 
tropy by the amount given in Eq. (6-14). But mixing together two por- 
tions of the same gas produces no change in entropy. If the molecules 
on both sides of the diaphragm are identical, there is really no in- 
crease in disorder after the diaphragm is removed. One can never 
tell (in fact one must never even ask) from which side of the diaphragm 
a given molecule came, so one cannot say that the two collections of 
identical molecules "intermixed" after the diaphragm was removed. 

We also note that division by N! was not required for the crystal 
discussed in Chapter 20. In a manner of speaking, N! was already 
divided out. We never tried to include, in our count, the number of 
ways the N atoms could be assigned to the different lattice points, and 
so we did not have to divide out the number again. More will be said 
about this in Chapter 27. 

The Effects of Molecular Interaction 

We have shown several times [see Eqs. (17-9), (18-19), and (21-14)] 
that, when the interaction between separate molecules in a gas is ne- 
glected completely, the resulting equation of state is that of a perfect 
gas. Before we finish discussing the translational partition function 
for a gas, we should show how the effects of molecular interaction can 
be taken into account. We shall confine our discussion to modifications 
of the translational terms, since these are the most affected. Molecu- 
lar interactions do change the rotational, vibrational, and electronic 
motions of each molecule, but the effects are smaller. 

The first effect of molecular interactions is to destroy the factor- 
ability of the translational partition function, at least partly. The 
translational energy, instead of being solely dependent on the molecu- 
lar momenta, now has a potential energy term, dependent on the rela- 
tive positions of the various molecules. This is a sum of terms, one 
for each pair of molecules. The force of interaction between molecule 
i and molecule j, to the first approximation, depends only on the dis- 
tance ry between their centers of mass. It is zero when ry is large; 
as the molecules come closer together than their average distance the 
force is first weakly attractive until, at r^ equal to twice the "radius" 
r of each molecule, they "collide" and their closer approach is pre- 
vented by a strong repulsive force. Thus the potential energy Wij(rij) 
of interaction between molecule i and molecule j has the form shown 
in Fig. 21-1, with a small positive slope (attractive force) for r^ > 
2r and a large negative slope (repulsive force) for rij < 2r . By the 
time rij is as large as the average distance between molecules in the 
gas, Wy is zero; in other words we still are assuming that the ma- 
jority of the time the molecules do not affect each other. 

Fig. 21-1. Potential energy of interaction between two mole-
cules as a function of their distance apart.

The translational part of the Hamiltonian of the system is

H_tr = (1/2M) Σ_{i=1}^{N} p_i² + Σ_{all pairs} W_ij(r_ij)    (21-15)

where the sum of the W_ij's is over all the (1/2)N(N−1) ≈ (1/2)N²
pairs of molecules in the gas. The translational partition function
is then

Z_tr = ∫···∫ e^{−H_tr/kT} dx_1 dx_2 ··· dz_N dp_{x1} dp_{y1} ··· dp_{zN}/h^{3N}

The integration over the momentum coordinates can be carried
through as with Eq. (21-7) and, since the molecules are indistin-
guishable, we divide the result by N!. However the integration over
the position coordinates is not just V^N this time, because of the pres-
ence of the W_ij's,

Z_tr = Z_p · Z_q ;    Z_p = (1/N!)(2πMkT/h²)^{(3/2)N} ≈ (e/N)^N (2πMkT/h²)^{(3/2)N}

Z_q = ∫···∫ exp[−Σ_{all pairs} W_ij(r_ij)/kT] dx_1 dy_1 ··· dy_N dz_N

(21-16)


Let us look at the behavior of the integrand for Z_q, as a function of
the coordinates x_1, y_1, z_1 of one molecule. The range of integration is
over the volume V of the container. Over the great majority of this
volume the molecule will be far enough away from all other molecules
so that ΣW_ij is 0 and the exponential is 1; only when molecule 1
comes close to another molecule (the j-th one, say) does r_1j become
small enough for W_1j to differ appreciably from zero. Of course if
r_1j becomes smaller than 2r_0, W_1j becomes very large positive and
the integrand for Z_q will vanish. The chance that two molecules get
closer together than 2r_0 is quite small.

Thus it is useful to add and subtract 1 from the integrand,

Z_q = ∫···∫ {1 + [exp(−ΣW_ij/kT) − 1]} dx_1 ··· dz_N

    = V^N + ∫···∫ [exp(−ΣW_ij/kT) − 1] dx_1 ··· dz_N    (21-17)

where the first unity in the braces can be integrated as in Eq. (21-7)
and the second term is a correction to the perfect-gas partition func-
tion, to take molecular interaction approximately into account. As we
have just been showing, over most of the range of the position coordi-
nates the integrand of this correction term is zero. Only when one of
the r_ij's is relatively small is any of the W_ij's different from zero.
To the first approximation, we can assume that only one W_ij differs
from zero at a time, as the integration takes place.

Thus the integral becomes a sum of similar integrals, one for each
of the (1/2)N(N−1) ≈ (1/2)N² interaction terms W_ij. A typical one is
the integral for which W_1j is not zero; for this one the integrand dif-
fers from zero only when (x_1, y_1, z_1) is near (x_j, y_j, z_j), so in the integra-
tion over dx_1 dy_1 dz_1 = dV_1 we could use the relative coordinates
r_1j, θ_1j, φ_1j. Once this integral is carried out, the integrand for the
rest of the integrations is constant, so each of the integrals over the
other dV_i's is equal to V. Thus

Z_q ≈ V^N + (1/2)N² V^{N−1} ∫ dφ_1j ∫ sin θ_1j dθ_1j ∫ (e^{−W_1j/kT} − 1) r_1j² dr_1j

    = V^N [1 + (N²/2V) · 4π ∫_0^∞ (e^{−W_1j/kT} − 1) r_1j² dr_1j]

When r_1j < 2r_0, W_1j becomes very large positive and the integrand
of the last term becomes −1, so this part of the quantity in brackets
is just minus the volume of a sphere of radius 2r_0, which we shall
call −2β. For r_1j > 2r_0, W_1j is small and negative, so (e^{−W_1j/kT} − 1)
≈ −(W_1j/kT), and this part of the quantity in brackets is roughly




−4π ∫_{2r_0}^∞ (W_1j/kT) r_1j² dr_1j

which we shall call 2α/kT.

The Van der Waals Equation of State

Therefore, to the first approximation, molecular interaction
changes Z_tr from the simple expression of Eq. (21-13) to

Z_tr ≈ (eV/n ℓ_T³)^N [1 − (N²β/V) + (N²α/VkT)]    (21-18)

where Nβ = N(8πr_0³/3) is proportional to the total part of the volume V
which is made unavailable to a molecule because of the presence of the
other molecules, and where α is a measure of the attractive potential
surrounding each molecule. The β and α terms in the bracket are
both small compared to 1.

The Helmholtz function and the entropy for this partition function
are

F_tr ≈ −(3/2)NkT ln(2πMkT/h²) − NkT ln(eV/N) − kT ln[1 − (N²β/V) + (N²α/VkT)]

     ≈ −(3/2)NkT ln(2πMkT/h²) − NkT ln(eV/N) + (N²βkT/V) − (N²α/V)

     ≈ −(3/2)NkT ln(2πMkT/h²) − NkT ln[(e/N)(V − Nβ)] − (N²α/V)

S_tr ≈ (5/2)Nk + (3/2)Nk ln(2πMkT/h²) + Nk ln[(1/N)(V − Nβ)]    (21-19)

Comparison with Eqs. (21-14) shows that U and C_v are unchanged,
to this approximation, by the introduction of molecular interaction.
However the equation of state becomes

P = −(∂F/∂V)_T ≈ [NkT/(V − Nβ)] − (N²α/V²)

or

(P + N²α/V²)(V − Nβ) ≈ NkT = nRT    (21-20)

which is the Van der Waals equation of state of Eq. (3-4), with
a = N_0²α and b = N_0β. The correction N²α/V² to P (which tends to
decrease P for a given V and T) is caused by the small mutual



attractions between molecules; the correction Nβ to V (which tends
to increase P for a given V and T) is the volume excluded by the
presence of the other molecules. Thus measurement of a and b from
the empirical equation of state can give us clues to molecular sizes
and attractive forces; or else computation of the forces between like
molecules can enable us to predict the Van der Waals equation of
state that a gas of these molecules should obey.
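In molar form, with a = N_0²α and b = N_0β, Eq. (21-20) reads (P + an²/V²)(V − nb) = nRT. The Python sketch below illustrates the size of the two corrections, using approximate handbook Van der Waals constants for CO₂; the constants and the chosen state are illustrative, not values from the text:

    R = 8.314                       # J/(mol K)
    a, b = 0.364, 4.27e-5           # approximate handbook values for CO2
    n, T, V = 1.0, 300.0, 1.0e-3    # one mole in one liter at 300 K (arbitrary)

    P_ideal = n * R * T / V
    P_vdw = n * R * T / (V - n * b) - a * n**2 / V**2
    print(P_ideal, P_vdw)   # here the attraction term wins: P is ~10% below ideal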




A Gas of Diatomic Molecules



In the molecular gas described in the preceding chapter, as long as 
kT is small compared to the energy spacing of rotational quantum lev- 
els of individual molecules, only Ztr differs appreciably from unity and 
the gas behaves like a perfect gas of point atoms (if we neglect molecu- 
lar interactions). To see for what temperature range this holds, we 
need to know the expression for the allowed energies of free rotation 
of a molecule. This expression is quite complicated for polyatomic 
molecules, so we shall go into detail only for diatomic molecules. 

The Rotational Factor 

If the two constituent nuclei have masses M_1 and M_2 and if they
are held a distance R apart at equilibrium, the moment of inertia of
the molecule, for rotation about an axis perpendicular to R through
the center of mass, is I = [M_1M_2R²/(M_1 + M_2)]. The moment of inertia
about the R axis is zero. The kinetic energy of rotation is then 1/2I
times the square of the total angular momentum of the molecule.

This angular momentum is quantized, of course, the allowed values
of its square being ħ²ℓ(ℓ + 1), where ℓ is the rotational quantum num-
ber, and the allowed values of the component along some fixed direc-
tion in space are one of the (2ℓ + 1) values −ℓħ, −(ℓ − 1)ħ, ..., +
(ℓ − 1)ħ, +ℓħ, for each value of ℓ. Put another way, there are 2ℓ + 1
different rotational states which have the energy (ħ²/2I)ℓ(ℓ + 1), so the
partition function for the rotational states of the gas system of N mol-
ecules is

Z_rot = [ Σ_{ℓ=0}^∞ (2ℓ + 1) exp(−θ_rot ℓ(ℓ + 1)/T) ]^N    (22-1)

where θ_rot = ħ²/2Ik. Therefore when T is very small compared to
θ_rot, z_rot → 1 and, according to the discussion following Eq. (19-11),
the rotational entropy and specific heat are negligible.





Values of θ_rot for a few diatomic molecules will indicate at what
temperatures Z_rot begins to be important. For H_2, θ_rot = 85°K; for
HD, θ_rot = 64°K; for D_2, θ_rot = 47°K; for HCl, θ_rot = 15°K; and for
O_2, θ_rot ≈ 2°K. Therefore, except for protium (hydrogen), protium
deuteride, and deuterium gases, T is appreciably larger than θ_rot in
the temperature range where the system is a gas.

In these higher ranges of temperature we can change the sum for
z_rot into an integral,

z_rot ≈ ∫_0^∞ (2ℓ + 1) exp[−θ_rot(ℓ² + ℓ)/T] dℓ = T/θ_rot

so

Z_rot ≈ (T/θ_rot)^N = (8π²IkT/h²)^N

F_rot ≈ −NkT ln(T/θ_rot);    S_rot ≈ Nk ln(eT/θ_rot)

U_rot ≈ NkT;    C_v^rot ≈ Nk,    T >> θ_rot    (22-2)

Thus for a gas of diatomic molecules at moderate temperatures,
where both translational and rotational partition functions have their
classical values, the total internal energy is (5/2)NkT and the total
heat capacity is (5/2)Nk, as mentioned in the discussion following Eq.
(13-11). The rotational terms add nothing to the equation of state,
however, for the effect of the neighboring molecules on a molecule's
rotational states is negligible for a gas of moderate or low densities;
consequently Z_rot and F_rot are independent of V. Therefore the
equation of state is determined entirely by Z_tr, unless the gas density
is so great that not even the Van der Waals equation of state is valid.
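The approach of the sum in Eq. (22-1) to its classical value T/θ_rot is easy to watch numerically; a minimal Python sketch (the sample temperatures and the summation cutoff are arbitrary choices):

    import math

    def z_rot(t, lmax=400):
        """Direct sum of Eq. (22-1) per molecule, with t = T/theta_rot."""
        return sum((2*l + 1) * math.exp(-l * (l + 1) / t) for l in range(lmax))

    for t in (0.5, 2.0, 10.0, 50.0):
        print(t, z_rot(t))   # approaches the classical value T/theta_rot from above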

For hydrogen and deuterium, a more careful evaluation of Eq.
(22-1) results in

Z_rot ≈ (1 + 3e^{−2θ_rot/T})^N ,    T << θ_rot

Z_rot ≈ e^{Nθ_rot/4T} [ (T/θ_rot) + (1/12) + (7θ_rot/480T) ]^N ,    T >> θ_rot

(22-3)


A plot of the exact value of C_v^rot/Nk, plotted against T/θ_rot, is shown
in Fig. 22-1. We note that C_v^rot rises somewhat above Nk = nR as T
increases, before it settles down to its classical value.

Fig. 22-1. The rotational part of the heat capacity of a di-
atomic gas as a function of temperature.

The measured values of C_v^rot for HD fit this curve very well, from T = 35°K to sev-
eral hundred degrees K, when molecular vibration begins to make it-
self felt. But the C_v^rot curves (i.e., C_v − C_v^tr) for H_2 and D_2 do not
match, no matter how one juggles the assumed values of θ_rot; for ex-
ample, the curve for H_2 has no range of T for which C_v^rot > Nk, and
the peak for D_2 is not as large as Fig. 22-1 would predict. The expla-
nation of this anomaly lies again with the effects of indistinguishability
of particles. The hydrogen and deuterium homonuclear molecules, H_2
and D_2, are the only ones with a low-enough boiling point so that these
effects can be measured. The effects would not be expected for HD,
for here the two nuclei in the molecule differ and are thus distinguish-
able. The calculations for H_2 and D_2 will be discussed later, in Chap-
ter 27, after we take up in detail the effects of indistinguishability.



The Gas at Moderate Temperatures

Therefore, for all gases except H_2, HD, and D_2, over the temper-
ature range from the boiling point of the gas to the temperature θ_vib,
where vibrational effects begin to be noticeable, the only effective fac-
tors in the partition function are those for translation and rotation,
and these factors can be computed classically, using Eqs. (21-14) and
(22-2). In this range we can also calculate the partition function for
polyatomic molecules. The classical Hamiltonian is (p_1²/2I_1) + (p_2²/2I_2)
+ (p_3²/2I_3), where I_1, I_2, I_3 are the moments of inertia of the molecule
about its three principal axes and p_1, p_2, p_3 are the corresponding an-
gular momenta. Therefore,




Z_rot ≈ [ (8π²/h³) ∫∫∫ exp{−[(p_1²/I_1) + (p_2²/I_2) + (p_3²/I_3)]/2kT} dp_1 dp_2 dp_3 ]^N

      = [ (8π²/σh³) √(I_1I_2I_3) (2πkT)^{3/2} ]^N    (22-4)

where 8π² is the factor produced by the integration over the angles
conjugate to p_1, p_2, p_3 and where σ is a symmetry factor, which enters
when two or more indistinguishable nuclei are present in a molecule.
(If the molecule is asymmetric, σ = 1; if it has one plane of symmetry,
σ = 2; etc.)

We can now write the thermodynamic functions for a gas for which
molecular interactions are negligible, for the temperature range where
kT is large compared with rotational-energy-level differences but
small compared with the vibrational-energy spacing.

For monatomic gases, there is no Z_rot and, from Eq. (21-14),

F ≈ F_tr ≈ −NkT[ln(V/N) + (3/2) ln T + F_0]

U ≈ (3/2)NkT;    C_v ≈ (3/2)Nk;    P ≈ NkT/V

For diatomic gases, use Eq. (22-2) for Z_rot, and

F ≈ F_tr + F_rot ≈ −NkT[ln(V/N) + (5/2) ln T + F_0]

U ≈ (5/2)NkT;    C_v ≈ (5/2)Nk;    P ≈ NkT/V    (22-5)

For polyatomic gases, use Eq. (22-4) for Z_rot, and

F ≈ F_tr + F_rot ≈ −NkT[ln(V/N) + 3 ln T + F_0]

U ≈ 3NkT;    C_v ≈ 3Nk;    P ≈ NkT/V

where the constant F_0 is a logarithmic function of k, h, the mass M
of the molecule, and of its moments of inertia, the value of which can
be computed from Eqs. (21-14) and (22-2) or (22-4). All these for-
mulas are for perfect gases, in that the equation of state is PV = NkT
and the internal energy U is a function of T only. The specific heats
depend on the nature of the molecule, whether it is monatomic, di-
atomic, or polyatomic.






We note that the result corresponds to the classical equipartition
of energy for translational and rotational motion, U being (1/2)kT
times the number of "unfrozen" degrees of translational and rota-
tional freedom. The effects of molecular interaction can be allowed
for approximately by including the factor in brackets of Eq. (21-18) in Z.
These results check quite well with the experimental measurements,
mentioned following Eq. (13-11).

The Vibrational Factor

When the temperature is high enough so that kT begins to equal
the spacing between vibrational levels of the molecules, then Z_vib be-
gins to depend on T and the vibrational degrees of freedom begin to
"thaw out." In diatomic molecules there is just one such degree of
freedom, the distance R between the nuclei. The corresponding poten-
tial energy W(R) has its minimum value at R_0, the equilibrium sep-
aration between the nuclei, and has a shape roughly like that shown in
Fig. 22-2. As R → ∞, the molecule dissociates into separate atoms;
the energy required to dissociate a molecule from equilibrium is D,
the dissociation energy.

Fig. 22-2. Diatomic molecular energy W(R) as a function
of the separation R between nuclei.

If the molecule is rotating there will be added a dynamic potential,
corresponding to the centrifugal force, which is proportional to the
square of the molecule's angular momentum and inversely proportional
to R³. Fortunately, for most diatomic molecules, this term, which
would couple Z_rot and Z_vib, is small enough so we can neglect it
here. For small-amplitude vibrations about R_0 the system acts like
a harmonic oscillator, with a natural frequency ω/2π which is a func-
tion of the nuclear masses and of the curvature of the W(R) curve
near R_0. Thus the lower energy levels are ħω(n + 1/2), where n is
the vibrational quantum number.

Therefore, to the degree of approximation which neglects coupling
between rotation and vibration and which considers all the vibrational
levels to be those of a harmonic oscillator,

Z_vib ≈ e^{−Nε_0/kT} (1 − e^{−ħω/kT})^{−N}    (22-6)

where ε_0 = W(R_0) + (1/2)ħω, and where we have used Eq. (20-2) to re-
duce the sum. The corresponding contributions to the Helmholtz func-
tion, entropy, etc., of the gas are
tion, entropy, etc., of the gas are 



F_vib ≈ Nε_0 + NkT ln(1 − e^{−ħω/kT})

U_vib ≈ Nε_0 + Nħω/(e^{ħω/kT} − 1) ≈ Nε_0 + Nħω e^{−ħω/kT}    kT << ħω
                                    ≈ Nε_0 + NkT    kT >> ħω

C_v^vib ≈ (Nħ²ω²/kT²) e^{ħω/kT}/(e^{ħω/kT} − 1)² ≈ (Nħ²ω²/kT²) e^{−ħω/kT}    kT << ħω
                                                 ≈ Nk    kT >> ħω

(22-7)

which are added to the functions of Eqs. (22-5) whenever the tempera-
ture is high enough (for T equal to or larger than θ_vib = ħω/k).

As examples of the limits above which these terms become appre-
ciable, the quantity θ_vib is equal to 2200°K for O_2, to 4100°K for
HCl, and to 6100°K for H_2 (Fig. 22-3). Therefore below roughly
1000°K the contribution of molecular vibration to S, U, and C_v of
diatomic gases is small. Above several thousand degrees, the vibra-
tional degree of freedom becomes "unfrozen," an additional energy
kT is added per molecule, and an additional Nk to C_v [a degree of
freedom with quadratic potential classically has energy kT; see
Eq. (13-14)].

Fig. 22-3. Vibrational part of the heat capacity of a di-
atomic molecular gas as a function of temper-
ature.
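The "thawing out" of the vibrational heat capacity in Eq. (22-7) is a one-line function of θ_vib/T; a minimal Python sketch (the sample values of θ_vib/T are arbitrary):

    import math

    def cv_vib(x):
        """C_v^vib/Nk from Eq. (22-7), with x = theta_vib/T = h_bar w / kT."""
        return x**2 * math.exp(x) / math.expm1(x)**2

    for x in (4.0, 2.0, 1.0, 0.5, 0.1):
        print(x, cv_vib(x))   # rises from nearly 0 toward the classical value 1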

In the case of a polyatomic molecule with n nuclei, there are
3n − 6 vibrational degrees of freedom, each with its fundamental
frequency ω_j/2π. The vibrational partition function is

Z_vib = e^{−Nε_0/kT} (z_1 z_2 ··· z_{3n−6})^N ;    z_j = (1 − e^{−ħω_j/kT})^{−1}    (22-8)

[compare this with Eq. (20-7)]. Again, for polyatomic gases, the vi-
brational contribution below about 1000°K is small; at higher T the
contribution to U is N(3n − 6)kT. It often happens that the molecules
are dissociated into their constituent atoms before the temperature is
high enough for the vibrational term to "unfreeze."

The temperature would have to be still higher before Z_el began to
have any effect. The usual electronic-level separation divided by k is
roughly equal to 10,000°K, at which temperature most gases are dis-
sociated and partly ionized. Such cases are important in the study of
stellar interiors, but are too complex to discuss in this book. And be-
fore we can discuss the thermal properties of electrons we must re-
turn to first principles again.




The Grand Canonical Ensemble



The canonical ensemble, representing a system of N particles kept
at constant temperature T, has proved to be a useful model for such
systems as the simple crystal and the perfect (or nearly perfect) gas.
Many other systems, more complicated than these, can also be repre-
sented by the canonical ensemble, which makes it possible to express
their thermodynamic properties in terms of their atomic structure.
But in Chapter 21 we discovered a major defect, not in the accuracy
of the canonical ensemble when correctly applied, but in its ease of
manipulation in some important cases.

Whenever the N particles making up the system are identical and
indistinguishable, the corresponding change in the multiplicity factors
g_ν has the result that the correct partition function does not separate
into a product of N independent factors, even if the interaction be-
tween particles is negligible. In cases where kT is large compared
to the separation between quantum levels of the system, we found we
could take this effect into account approximately by dividing by N!. In
this chapter we shall discuss a more general kind of ensemble, which
will allow us to retain factorability of the partition function and at the
same time take indistinguishability into account exactly, no matter
what value T has.

An Ensemble with Variable N 

The new ensemble, which we shall call the grand canonical ensem- 
ble, is one in which we relax the requirement that we placed on the 
microcanonical and canonical ensembles— that each system in the en- 
semble has exactly N particles. We can imagine an infinitely large, 
homogeneous supersystem kept at constant T and P. The system the 
new ensemble will represent is that part of the supersystem contained 
within a volume V. We can imagine obtaining one of the sample sys- 
tems of the ensemble by withdrawing that part of the supersystem 
which happens to be in a volume V at the instant of removal, and of 







doing this successively to obtain all the samples that make up the en-
semble. Not only will each of the samples differ somewhat in regard
to their total energy, but the number of particles N in each sample
will differ from sample to sample. Only the average energy U and the
average number of particles N̄, averaged over the ensemble, will be
specified.

The equations and subsidiary conditions serving to determine the
distribution function are thus still more relaxed than for the canonical
ensemble. A microstate of the grand canonical ensemble is specified
by the number of particles N that the sample system has, and by the
quantum numbers ν_N = ν_1, ν_2, ..., ν_{3N}, which the sample may have
and which will specify its energy E_Nν. Thus for an equilibrium mac-
rostate the distribution function f_Nν must satisfy the following re-
quirements:



S = −k Σ_{N,ν} f_Nν ln f_Nν  is maximum

subject to

Σ_{N,ν} f_Nν = 1;    Σ_{N,ν} E_Nν f_Nν = U;    Σ_{N,ν} N f_Nν = N̄    (23-1)

where U, N̄, and S are related by the usual thermodynamic relation-
ships, such as U = TS + Ω + N̄μ, for example, or any other of the
equations (8-21). Function Ω is the grand potential of Eq. (8-15). Note
that, instead of n, the mean number of moles in the system, we now
use N̄, the mean number of particles, and therefore μ is now the
chemical potential (the Gibbs function) per particle, rather than per
mole, as it was in the first third of this book. We shall consistently
use it thus henceforth, so it should not be confusing to use the same
symbol, μ.

The Grand Partition Function

As before, we simplify the requirements by using Lagrange multi-
pliers, and require that

−k Σ_{N,ν} f_Nν ln f_Nν + α_1 Σ_{N,ν} f_Nν + α_e Σ_{N,ν} E_Nν f_Nν + α_n Σ_{N,ν} N f_Nν

be maximum,    (23-2)

with α_1, α_e, α_n chosen so that

Σ_{N,ν} f_Nν = 1;    Σ_{N,ν} E_Nν f_Nν = U;    Σ_{N,ν} N f_Nν = N̄

The partials with respect to the f_Nν's, which must be made zero, re-
sult in the equations

k ln f_Nν + k = α_1 + α_n N + α_e E_Nν

or

f_Nν = exp[(1/k)(α_1 − k + α_n N + α_e E_Nν)]    (23-3)

The first requirement is met by setting

e^{(k−α_1)/k} = Ξ = Σ_{N,ν} e^{(α_n N + α_e E_Nν)/k}

The other two are met by inserting this into the expression for S,

S = −Σ_{N,ν} f_Nν (α_1 − k + α_n N + α_e E_Nν) = (k − α_1) − α_n N̄ − α_e U

and then identifying this with the equation S = (U − Ω − N̄μ)/T, from
Eq. (8-21).

We see that we must have

k − α_1 = −(Ω/T) = k ln Ξ;    α_n = μ/T;    α_e = −(1/T)

so that the solution of Eq. (23-1) is

f_Nν = (1/Ξ) exp[(μN − E_Nν)/kT];    Ξ = Σ_{N,ν} exp[(μN − E_Nν)/kT]

Ω = −kT ln Ξ = −PV;    (∂Ω/∂μ)_{T,V} = −N̄

(∂Ω/∂T)_{V,μ} = −S;    (∂Ω/∂V)_{T,μ} = −P    (23-4)

C_v = T(∂S/∂T)_{V,μ};    F = Ω + μN̄;    U = F + ST = Ω + ST + μN̄

These are the equations for the grand canonical ensemble. The
sum Ξ is called the grand partition function; it is the sum of the ca-
nonical partition functions Z(N) for ensembles with different N's,
with weighting factors e^{μN/kT}:


THE GRAND CANONICAL ENSEMBLE 205 

Ξ = Σ_{N=0}^∞ e^{μN/kT} Z(N);    Z(N) = Σ_ν e^{−E_Nν/kT}    (23-5)

All the thermodynamic properties of the system can be obtained from
Ω by differentiation, as with the canonical ensemble. We shall see that
this partition function has even greater possibilities for factoring than
does its canonical counterpart.

The Perfect Gas Once More 

Just to show how this ensemble works we take up again the familiar theme of the perfect gas of point particles. From Eq. (21-13) we see that, if we take particle indistinguishability approximately into account, the canonical partition function for the gas of N particles is

    Z(N) \approx (1/N!)\left(V/\ell_t^3\right)^N; \qquad \ell_t = h/\sqrt{2\pi MkT}

and therefore the grand partition function is, from Eq. (23-5),

    \mathfrak{Z} = \sum_{N=0}^{\infty} (1/N!)\left[(V/\ell_t^3)\, e^{\mu/kT}\right]^N = \exp\left[(V/\ell_t^3)\, e^{\mu/kT}\right]    (23-6)

where we have used the series expression e^x = \sum_{n=0}^\infty (x^n/n!). Then, from Eqs. (23-4),

    \Omega = -kTV(2\pi MkT/h^2)^{3/2} e^{\mu/kT} = -PV

    \bar N = V(2\pi MkT/h^2)^{3/2} e^{\mu/kT} = -(\Omega/kT) = PV/kT

    S = kV(2\pi MkT/h^2)^{3/2} e^{\mu/kT}\left(\tfrac{5}{2} - \tfrac{\mu}{kT}\right) = \bar N k\left(\tfrac{5}{2} - \tfrac{\mu}{kT}\right)

    U = \Omega + ST + \mu\bar N = -\bar N kT + \bar N kT\left(\tfrac{5}{2} - \tfrac{\mu}{kT}\right) + \mu\bar N = \tfrac{3}{2}\bar N kT

    F = \bar N(\mu - kT); \qquad \mu = -kT \ln\left[(V/\bar N)(2\pi MkT/h^2)^{3/2}\right]    (23-7)

which of course present, in slightly different form, the same expressions for U, C_v, and the equation of state as did the other ensembles; only now N̄ occurs instead of N. We also obtain directly an expression for the chemical potential per particle, μ, for the perfect gas.
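As a numerical check of Eqs. (23-6) and (23-7), the following minimal sketch (assuming helium at one atmosphere and 300°K; the gas and conditions are illustrative choices, not from the text) shows how small N̄ℓ_t³/V is for an ordinary gas, and how negative μ then is:

```python
import math

# A minimal numerical check of Eqs. (23-6) and (23-7); the gas
# (helium) and the conditions (1 atm, 300 K) are assumptions chosen
# only for illustration.
k = 1.380649e-23      # Boltzmann constant, joule/K
h = 6.62607015e-34    # Planck constant, joule-sec
M = 6.646e-27         # mass of a helium atom, kg
T = 300.0             # K
P = 1.013e5           # newton/m^2

n = P / (k * T)                                   # N-bar/V, from Eq. (23-7)
lt3 = (h**2 / (2 * math.pi * M * k * T))**1.5     # l_t^3 of Eq. (23-6)

print("N/V          =", n, "per m^3")
print("(N/V) l_t^3  =", n * lt3)                  # ~3e-6: MB counting is safe
print("mu           =", k * T * math.log(n * lt3) / 1.602e-19, "eV")  # Eq. (23-7)
```

The second number printed is the degeneracy parameter that reappears as η in Chapter 25; its smallness is what justifies the approximate (1/N!) correction used here.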

The probability density that a volume V of such a gas, in equilibrium at temperature T and chemical potential μ, happens to contain N particles, and that these particles should have momenta p₁, p₂, ..., p_N and be located at the points specified by the vectors r₁, r₂, ..., r_N, is then

    f_N(q,p) = \frac{1}{N!\, h^{3N}\, \mathfrak{Z}} \exp\left[\frac{1}{kT}\left(\mu N - \sum_{\nu=1}^{N} \frac{p_\nu^2}{2M}\right)\right]; \qquad \mathfrak{Z} = \exp\left[V\left(\frac{2\pi MkT}{h^2}\right)^{3/2} e^{\mu/kT}\right]    (23-8)



This is a generalization of the Maxwell distribution. The expression not only gives us the distribution in momentum of the N particles which happen to be in volume V at that instant (it is of course independent of their positions in V), but it also predicts the probability that there will be N molecules in volume V then. If we should wish to use P and T to specify the equilibrium state, instead of μ and T, this probability density would become



    f_N(q,p) = \frac{(P/kT)^N}{N!\,(2\pi MkT)^{3N/2}} \exp\left[-\frac{PV}{kT} - \sum_{\nu=1}^{N} \frac{p_\nu^2}{2MkT}\right]    (23-9)



Density Fluctuations in a Gas

By summing f_Nν over ν for a given N (or by integrating f_N(q,p) over the q's and p's for a given N) we shall obtain the probability that a volume V of the gas, at equilibrium at pressure P and temperature T, will happen to have N molecules in it. From Eqs. (23-4) and (23-5) this is

    f_N = \sum_{\nu} f_{N\nu} = (1/\mathfrak{Z})\, e^{\mu N/kT} Z(N) = e^{(\Omega + \mu N)/kT} Z(N)

Using the expressions for Z(N) and those for (Ω/kT) and (μ/kT), we obtain



    f_N = \left[\frac{\bar N}{V}\left(\frac{h^2}{2\pi MkT}\right)^{3/2}\right]^N \frac{1}{N!}\left[V\left(\frac{2\pi MkT}{h^2}\right)^{3/2}\right]^N e^{-\bar N} = (\bar N^N/N!)\, e^{-\bar N} = (1/N!)(PV/kT)^N e^{-PV/kT}    (23-10)



This is a Poisson distribution [see Eq. (11-5)] for the number of particles in a volume V of the gas. The mean number of particles is N̄ = PV/kT and the probability f_N is greatest for N near N̄ in value. But f_N is not zero when N differs from N̄; it is perfectly possible to find a volume V in the gas which has a greater or smaller number of molecules in it than PV/kT. The variance of the number present is



    (\Delta N)^2 = \sum_{N=0}^{\infty} (N - \bar N)^2 f_N = \sum_{N=0}^{\infty} N^2 f_N - (\bar N)^2 = \bar N = PV/kT    (23-11)

and the fractional deviation from the mean is

    \Delta N/\bar N = 1/\sqrt{\bar N} = \sqrt{kT/PV}    (23-12)

(It should be remembered that the system described by the grand canonical ensemble is not a gas of N molecules in a volume V, but that part of a supersystem which happens to be in a volume V, where V is much smaller than the volume occupied by the supersystem; thus the number of particles N that might be present can vary from zero to practically infinity.)

The smaller the volume of the gas looked at (the smaller the value of N̄) the greater is this fractional fluctuation of the number of particles present (or of density, for ΔN/N̄ = Δρ/ρ). Thus we have arrived at the result of Eq. (15-6), for the density fluctuations in various portions of a gas, by a quite different route.
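To give Eq. (23-12) a sense of scale, here is a minimal numerical sketch, assuming air-like conditions of 1 atm and 300°K (illustrative values only):

```python
import math

# Scale of the density fluctuations of Eqs. (23-10)-(23-12); the
# pressure and temperature (1 atm, 300 K) are assumed values.
k, T, P = 1.380649e-23, 300.0, 1.013e5

for edge in (1e-6, 1e-7, 1e-8):          # cube edges: 1 um, 100 nm, 10 nm
    V = edge**3
    N = P * V / (k * T)                  # mean number of molecules in the cube
    print(f"edge {edge:5.0e} m:  N-bar = {N:9.3e}   dN/N-bar = {1/math.sqrt(N):7.1e}")
```

Even a cube only 10 nm on a side still holds about 25 molecules, and its density fluctuates by roughly 20 per cent; fluctuations of this scale are what scatter light in a gas.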




24

Quantum
Statistics



But we still have not demonstrated the full utility of the grand canonical ensemble for handling calculations involving indistinguishable particles. The example in the previous chapter used the approximate correction factor (1/N!), which we saw in Chapter 21 was not valid at low temperatures or high densities. We must now complete the discussion of the counting of states, which was begun there.

Occupation Numbers

In comparing the partition functions for distinguishable and indistinguishable particles, given in Eqs. (21-10) and (21-11) for the canonical ensemble, we saw that it was easier to compare the two if we talked about the number of particles occupying a given particle state rather than talking about which particle is in which state. In fact if the particles are indistinguishable it makes no sense to talk about which particle is in which state. We were there forced to describe the system state ν by specifying the number n_j of particles which occupy the j-th particle state, each of them having energy ε_j. The numbers n_j are called occupation numbers.

Of course if the interaction between particles is strong (as is the case with a crystal) we cannot talk about separate particle states; occupation numbers lose their specific meaning and we have to talk about normal modes instead of particles. But let us start with the particle interactions being small enough so we can talk about particle states and their occupation numbers. The results we obtain will turn out to be capable of extension to the strong-interaction case.

We thus assume that, in the system of N particles, it makes sense to talk about the various quantum states of an individual particle, which we call particle states. These states are ranked in order of increasing energy, so that if ε_j is the energy of a particle in state j, then ε_{j+1} ≥ ε_j. Instead of specifying the system state ν by listing what state particle 1 is in, and so on for each particle, we specify it by saying



how many particles are in state j (i.e., by specifying n_j). Thus when the system is in state ν = (n₁, ..., n_j, ...), the total number of particles and the total energy are

    N_\nu = \sum_j n_j; \qquad E_\nu = \sum_j n_j \epsilon_j    (24-1)

For the canonical ensemble, we have to construct the partition function Z for a system with exactly N particles; the sum over ν includes only those values of the n_j's for which their sum comes out to equal N. This restriction makes the calculation of a partition function like that of Eq. (21-11) more difficult than it needs to be. With the grand canonical ensemble the limitation to a specific value of N is removed and the summation can be carried out over all the occupation numbers with no hampering restriction regarding their total sum.

Thus the grand partition function can be written in a form analogous to the Z of Eqs. (19-8) and (21-10),

    \mathfrak{Z} = \sum_{\nu} g_\nu \exp\left[(1/kT) \sum_j n_j(\mu - \epsilon_j)\right]    (24-2)

by virtue of Eqs. (24-1). The multiplicities g_ν, for each system state ν (i.e., for each different set of occupation numbers n_j), are chosen according to the degree of distinguishability of the particles involved. Indeed, this way of writing 𝔷 is appropriate also when the "particles" are identical subsystems, such as the molecules of a gas. In such cases the "particle states" j are the molecular quantum states, specified by their translational, rotational, vibrational, and electronic quantum numbers, the allowed energies ε_j are the sums ε^tr_{kmn} + ε^rot_{λμ} + ε^vib_n + ε^el of Chapter 21, and the n_j's are the numbers of molecules that have the same totality of quantum numbers j = k, m, n, λ, μ, n, etc. However, we shall postpone discussion of this generalization until Chapter 27.

Maxwell-Boltzmann Particles

At present we wish to utilize the grand canonical ensemble to investigate systems of "elementary" particles, such as electrons or protons or photons or the like, sufficiently separated so that their mutual interactions are negligible. Each particle in such a system will have the same mass m and will be subject to the same conservative forces, so that the Schrödinger equation for each will be the same. Therefore, the total set of allowed quantum numbers, represented by the index j, will be the same for each particle (although at any instant different particles may have different values of j). The allowed energy corresponding to the set of quantum numbers represented by j is ε_j and the number of particles in this state is n_j. The grand partition function can then be written as in Eq. (24-2).

The values of the multiplicities g_ν must now be determined for each kind of fundamental particle. This to some extent is determined by the nature of the forces acting on the particles. For example, if the particle has a spin and a corresponding magnetic moment, and if no magnetic field is present, the several allowed orientations of spin will have the same energy, and the g's will reflect this fact. Let us avoid this complication at first, and assume that each particle state j has an energy ε_j which differs from that of any other particle state.

When the elementary particles are distinguishable, the discussion leading up to Eq. (21-9) indicates that g_ν, the number of different ways N particles can be assigned to the various particle states, n_j of them being in the j-th particle state, is [N!/n₁! n₂! ···]. Since 0! = 1, we can consider all the n's as being represented in the denominator, even those for states unoccupied; an infinite product of 1's is still 1. The grand partition function will thus have the same kind of terms as in Eq. (21-10), but the restriction on the summation is now removed; all values of the n_j's are allowed. Therefore, for distinguishable particles,



    \mathfrak{Z}_{\rm dist} = \sum_{n_1, n_2, \dots} \frac{(n_1 + n_2 + \cdots)!}{n_1!\, n_2! \cdots} \exp\left[(1/kT) \sum_j n_j(\mu - \epsilon_j)\right]    (24-3)

In contrast to Eq. (20-10), this sum is not separable into a simple product of one-particle factors; because of the lack of limitation on N it reduces to the sum \sum_N \left[e^{\mu/kT} \sum_j e^{-\epsilon_j/kT}\right]^N, each term of which is a product of one-particle partition functions.

In the classical limit, when there are many particle states in the range of energy equal to kT, the chance of two particles being in the same particle state is vanishingly small, and the preponderating terms in series (24-3) are those for which no n_j is larger than 1. In this case we may correct for the indistinguishability of particles by the "shot-gun" procedure, used in Chapter 21, of dividing every term by the total number of ways in which N particles can be arranged in N different states. The resulting partition function



    \mathfrak{Z}_{\rm MB} = \sum_{n_1, n_2, \dots} \frac{1}{n_1!\, n_2! \cdots} \exp\left[(1/kT) \sum_j n_j(\mu - \epsilon_j)\right]

    = \left[\sum_{n_1} \frac{1}{n_1!}\, e^{n_1(\mu - \epsilon_1)/kT}\right] \left[\sum_{n_2} \frac{1}{n_2!}\, e^{n_2(\mu - \epsilon_2)/kT}\right] \cdots

    = \exp\left[\sum_j e^{(\mu - \epsilon_j)/kT}\right] = \prod_j \mathfrak{z}_j; \qquad \mathfrak{z}_j = \exp\left[e^{(\mu - \epsilon_j)/kT}\right]    (24-4)

can be separated, being a product of factors 𝔷_j, one for each particle state j, not one for each particle. This is the partition function we used to obtain Eqs. (23-6) and (23-7). As was demonstrated there and earlier, this way of counting states results in the thermodynamics of a perfect gas. It results also in the Maxwell-Boltzmann distribution for the mean number of particles occupying a given particle state j.

This last statement can quickly be shown by obtaining the grand potential Ω from 𝔷 and then, by partial differentiation by μ, obtaining the mean number of particles in the system,

    \Omega_{\rm MB} = -kT \ln(\mathfrak{Z}_{\rm MB}) = -kT \sum_j e^{(\mu - \epsilon_j)/kT} = -PV

    \bar N = e^{\mu/kT} \sum_j e^{-\epsilon_j/kT} = \sum_j \bar n_j; \qquad \bar n_j = e^{(\mu - \epsilon_j)/kT}    (24-5)

where N̄ is equal to PV/kT, thus fixing the value of μ = -kT ln[(kT/PV) Σ_j e^{-ε_j/kT}]. In fact μ acts like a magnitude parameter in the grand canonical ensemble; its value is adjusted to make -(∂Ω/∂μ)_TV equal to the specified value of N̄. The quantity n̄_j, the mean value of the occupation number for the j-th particle state for this ensemble, takes the place of the particle probabilities (for we can no longer ask what state a given particle is in). We see that the mean number of particles in state j, with energy ε_j, is

    \bar n_j = \bar N\, e^{-\epsilon_j/kT} \Big/ \sum_i e^{-\epsilon_i/kT}    (24-6)

which is proportional to the Maxwell-Boltzmann factor e^{-ε_j/kT}.

Therefore particles that correspond to this partition function may be called Maxwell-Boltzmann particles (MB particles for short). No actual system of particles corresponds exactly to this distribution for all temperatures and densities. But all systems of particles approach this behavior in the limit of high-enough temperatures, whenever the classical phase-space approximation is valid.




Before the advent of the quantum theory the volume of phase space 
occupied by a single microstate was not known; in fact it seemed rea- 
sonable to assume that every element of phase space, no matter how 
small, represented a separate microstate. If this were the case, the 
chance that two particles would occupy the same state was of the sec- 
ond order in the volume element and could be neglected. Thus for 
classical statistical mechanics the procedure of dividing by N! was 
valid. Now we know that the magnitude of phase-space volume occu- 
pied by a microstate is finite, not infinitesimal; it is apparent that 
there can be situations in which the system points are packed closely 
enough in phase space so that two or more particles are within the 
volume that represents a single microstate; in these cases the MB 
statistics is not an accurate representation. 

Bosons and Fermions

Actual particles are of two types. Both types are indistinguishable and thus, according to Eq. (20-11), have multiplicity factors g_ν = 1, rather than (N!/n₁! n₂! ···). A state of the system is specified by specifying the values of the occupation numbers n_j. Each such state is a single one; it has no meaning to try to distinguish which particle is in which state; all we can specify are the numbers n_j in each state.

In addition to their indistinguishability, different particles obey different rules regarding the maximum value of n_j. One set of particles can pack as many into a given particle state as the distribution will allow; n_j can take on all values from 0 to ∞. Such particles are called bosons; they are said to obey the Bose-Einstein statistics (BE for short). Photons and helium atoms are examples of bosons. For these particles the g_ν of Eq. (24-2) are all unity and the grand partition function is



    \mathfrak{Z}_{\rm BE} = \sum_{n_1, n_2, \dots} \exp\left[(1/kT) \sum_j n_j(\mu - \epsilon_j)\right]

    = \left[\sum_{n_1} e^{n_1(\mu - \epsilon_1)/kT}\right] \left[\sum_{n_2} e^{n_2(\mu - \epsilon_2)/kT}\right] \cdots = \prod_j \mathfrak{z}_j

    \mathfrak{z}_j = \left[1 - e^{(\mu - \epsilon_j)/kT}\right]^{-1}    (24-7)

where we have used Eq. (20-2) to consolidate the factor sums 𝔷_j. Here again the grand partition function separates into factors, one for each particle state, rather than one for each particle. We note that the series for the j-th factor does not converge unless μ is less than the corresponding energy ε_j.
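The factoring in Eq. (24-7) can be checked by brute force. The sketch below uses a toy system of three particle states with assumed energies (in units of kT), truncating each occupation sum at a finite n_max:

```python
import math

# Brute-force check of the factorization in Eq. (24-7): a toy system
# of three particle states (energies and mu are assumed values, in
# units of kT), with each occupation sum truncated at n_max.
kT, mu = 1.0, -0.5
eps = [0.1, 0.4, 1.3]
n_max = 60            # truncation error ~ exp(-0.6*60), negligible here

direct = 0.0
for n1 in range(n_max):
    for n2 in range(n_max):
        for n3 in range(n_max):
            N = n1 + n2 + n3
            E = n1*eps[0] + n2*eps[1] + n3*eps[2]
            direct += math.exp((mu*N - E) / kT)

product = 1.0
for e in eps:
    product /= 1.0 - math.exp((mu - e) / kT)   # the factors z_j of Eq. (24-7)

print(direct, product)    # the two numbers agree to many decimal places
```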

Here again we can calculate the grand potential and the mean number of particles in the system of bosons,

    \Omega_{\rm BE} = kT \sum_j \ln\left[1 - e^{(\mu - \epsilon_j)/kT}\right] = -PV

    \bar N = \sum_j \bar n_j; \qquad \bar n_j = \left[e^{(\epsilon_j - \mu)/kT} - 1\right]^{-1}    (24-8)

where n̄_j is the mean number of particles in the j-th particle state. In this case there is no simple equation fixing the value of μ in terms of N̄ (or of PV) and T, nor is the relationship between N̄ and Ω = -PV as simple as it was with Eq. (24-5). Nevertheless, knowing the allowed energy values ε_j and the temperature T, we can adjust μ so the sum of 1/[e^{(ε_j - μ)/kT} - 1] over all j is equal to N̄. This value of μ is then used to compute the other thermodynamic quantities.

Note the difference between the occupation number n̄_j for the boson and that of Eq. (24-5) for the MB particle. For higher states, where ε_j - μ ≫ kT, the two values do not differ much, but for the lower states, at lower temperatures, where ε_j - μ is equal to or smaller than kT, the n̄_j for the boson is appreciably greater than that for the MB particle (shall we call it a maxwellon?). Bosons tend to "condense" into their lower states, at low temperatures, more than do maxwellons.

The other kind of particle encountered in nature has the idiosyncrasy of refusing to occupy a state that is already occupied by another particle. In other words the occupation numbers n_j for such particles can be 0 or 1, but not greater than 1. Particles exhibiting such unsocial conduct are said to obey the Pauli exclusion principle. They are called fermions and are said to obey Fermi-Dirac statistics (FD for short). Electrons, protons, and other elementary particles with spin 1/2 are fermions. For these particles g_ν = 1, but the sum over each n_j omits all terms with n_j > 1. Therefore,



    \mathfrak{Z}_{\rm FD} = \sum_{n_1, n_2, \dots} \exp\left[(1/kT) \sum_j n_j(\mu - \epsilon_j)\right] = \prod_j \mathfrak{z}_j; \qquad \mathfrak{z}_j = 1 + e^{(\mu - \epsilon_j)/kT}    (24-9)



Again the individual factors 𝔷_j are for each quantum state, rather than for each particle. The mean values of the occupation numbers can be obtained from Ω as before,

    \Omega_{\rm FD} = -kT \sum_j \ln\left[1 + e^{(\mu - \epsilon_j)/kT}\right] = -PV

    \bar N = \sum_j \bar n_j; \qquad \bar n_j = \left[e^{(\epsilon_j - \mu)/kT} + 1\right]^{-1}    (24-10)



where again the relation between N̄ and PV (i.e., the equation of state) is not so simple as it is for maxwellons, and again there is no simple relationship that determines μ in terms of N̄; the equation for N̄ must be inverted to find μ as a function of N̄.

Comparing the mean number n̄_j of particles in state j for fermions with the n̄_j for MB particles [Eq. (24-5)], we see that for the higher states, where ε_j - μ ≫ kT, the two values are roughly equal, but for the lower states the n̄_j for fermions is appreciably smaller (for a given value of μ) than the n̄_j for maxwellons. Fermions tend to stay away from the lower states more than do maxwellons, and thus much more than do bosons. In fact, fermions cannot enter a state already occupied by another fermion; according to the Pauli principle n̄_j cannot be larger than 1.

Comparison among the Three Statistics

The differences between the BE, MB, and FD statistics can be most simply displayed by comparing the multiplicities g_ν or the mean occupation numbers n̄_j. In each case the multiplicities are products of factors g_j(n_j), one for each particle state j. The three sets of values are

    g_j(n_j) = 1 \quad (n_j = 0 \text{ or } 1); \qquad = 1 \quad (n_j = 2, 3, \dots) \qquad \text{BE statistics}

    g_j(n_j) = 1 \quad (n_j = 0 \text{ or } 1); \qquad = 1/n_j! \quad (n_j = 2, 3, \dots) \qquad \text{MB statistics}

    g_j(n_j) = 1 \quad (n_j = 0 \text{ or } 1); \qquad = 0 \quad (n_j = 2, 3, \dots) \qquad \text{FD statistics}    (24-11)

The g's are identical for n_j = 0 or 1; they differ for the higher values of the occupation numbers. Bosons have g_j = 1 for all values of n_j; they don't care how many others are in the same state. Fermions have g_j = 0 for n_j > 1; they are completely unsocial. The approximate statistics we call MB has values intermediate between 0 and 1 for n_j > 1; these particles are moderately unsocial; the g_j tend toward zero as n_j increases.



In terms of the energy ε_j of the j-th particle state and the value of the normalizing parameter μ, the mean number of particles in state j is

    \bar n_j = 1\Big/\left[e^{(\epsilon_j - \mu)/kT} - 1\right] \qquad \text{BE statistics}

    \bar n_j = 1\Big/ e^{(\epsilon_j - \mu)/kT} \qquad \text{MB statistics}

    \bar n_j = 1\Big/\left[e^{(\epsilon_j - \mu)/kT} + 1\right] \qquad \text{FD statistics}    (24-12)
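Equations (24-12) are easy to tabulate. The following sketch, with (ε - μ) measured in units of kT (an arbitrary illustrative choice), shows the three statistics merging at high energies:

```python
import math

# The three mean occupation numbers of Eq. (24-12), with y = (e - mu)/kT.
def n_BE(y): return 1.0 / (math.exp(y) - 1.0)
def n_MB(y): return math.exp(-y)
def n_FD(y): return 1.0 / (math.exp(y) + 1.0)

print("   y      BE        MB        FD")
for y in (0.1, 0.5, 1.0, 3.0, 10.0):
    print(f"{y:5.1f}  {n_BE(y):8.4f}  {n_MB(y):8.4f}  {n_FD(y):8.4f}")
# For y >> 1 the three columns coincide; for y << 1 the BE value grows
# without limit while the FD value saturates below 1.
```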



For FD statistics, n̄_j can never be larger than 1; for MB statistics n̄_j can be larger than 1 for those states with μ larger than ε_j; for BE statistics μ cannot be larger than ε₁ [see discussion of Eq. (24-7)] but n̄_j can be much larger than 1 if (ε_j - μ)/kT is small.

In each case the value of μ is determined by requiring that the sum of the n̄_j's, over all values of j, be equal to the mean number N̄ of particles in the system. If kT is large compared to the energy spacings ε_{j+1} - ε_j, then n̄_{j+1} will not differ much from n̄_j and the sum for N̄ will consist of a large number of n̄_j's, of slowly diminishing magnitude. Therefore much of the sum for N̄ will be "carried" by the n̄_j's for the higher states (j > 1). If, at the same time, N̄ is small, then all the n̄_j's must be small; even n̄₁ must be less than 1. For this to be so, (ε₁ - μ) must be larger than kT, so that the terms e^{(ε_j - μ)/kT} must all be considerably larger than 1. In this case the values of the n̄_j's, for the three statistics, are nearly equal, and we might as well use the intermediate MB values, since these provide us with a simpler set of equations for μ, S, P, C_v, etc. [Eqs. (24-5)].

In other words, in the limit of high temperature and low density, both bosons and fermions behave like classical Maxwell-Boltzmann particles. For this reason, the fact that classical statistical mechanics is only an approximation did not become glaringly apparent until systems of relatively high density were studied at low temperatures (except in the case of photons, which are a special case).

When kT is the same size as ε₂ - ε₁ or smaller, the three statistics display markedly different characteristics. For bosons μ becomes very nearly equal to ε₁ (μ = ε₁ - δ, where δ ≪ kT) so that

    \bar n_1 = \left[e^{(\epsilon_1 - \mu)/kT} - 1\right]^{-1} \simeq kT/\delta

and

    \bar n_j = \left[e^{(\epsilon_j - \mu)/kT} - 1\right]^{-1} \qquad \text{for } j > 1

which is considerably smaller than n̄₁ if ε₂ - ε₁ > kT. Therefore at low temperatures and high densities, most of the bosons are in the lowest state (j = 1) and

    \bar n_1 = \bar N - \sum_{j=2}^{\infty} \left[e^{(\epsilon_j - \epsilon_1)/kT} - 1\right]^{-1} \to \bar N, \qquad kT \to 0    (24-13)

which serves to determine δ, and therefore μ = ε₁ - δ. At very low temperatures bosons "condense" into the ground state. The "condensation" is not necessarily one in space, as with the condensation of a vapor into a liquid. The ground state may be distributed all over position space but may be "condensed" in momentum space. This will be illustrated later.

For fermions such a condensation is impossible; no more than one fermion can occupy a given state. As T → 0, μ must approach ε_N, so that n̄_j = [e^{(ε_j - μ)/kT} + 1]^{-1} is practically equal to 1 for j < N (since e^{(ε_j - μ)/kT} is then very much smaller than 1 for ε_j < ε_N) and is much smaller than 1 for j > N (since e^{(ε_j - μ)/kT} is then very large compared to 1 for ε_j > ε_N). Thus at low temperatures the lowest N particle states are completely filled with fermions (one per state) and the states above this "Fermi level" ε_N are devoid of particles.

The behavior of MB particles differs from that of either bosons or fermions at low temperatures and high densities. The lower states are populated by more than one particle, in contrast to the fermions, but they don't condense exclusively and suddenly into just the ground state, as do bosons. The comparison between the number of particles per unit energy range, for a gas of bosons, one of fermions, and one of maxwellons, is shown in Fig. 24-1. We see that fermions pack the lower N levels solidly but uniformly, that bosons tend to concentrate in the very lowest state, and that maxwellons are intermediate in distribution. Because of the marked difference in behavior from that of a classical perfect gas, a gas of bosons or fermions at low temperatures and high densities is said to be degenerate.

Distribution Functions and Fluctuations

With indistinguishable particles there is no sense in asking the probability that a specific particle is in state j; all we can ask for is the probability f_j(n_j) that n_j particles are in state j. These probabilities can be obtained from the distribution function of the ensemble, given in Eq. (23-4). For

    f_{N\nu} = (g_\nu/\mathfrak{Z}) \exp\left[\sum_{j} n_j(\mu - \epsilon_j)/kT\right] = f_1(n_1)\, f_2(n_2) \cdots

    f_j(n_j) = \left(g_j(n_j)/\mathfrak{z}_j\right) e^{n_j(\mu - \epsilon_j)/kT}    (24-14)

where the factor 𝔷_j of the partition function, for the j-th particle state, is given by Eq. (24-4), (24-7), or (24-9), depending upon whether the particles in the system are MB, BE, or FD particles.

To be specific, the probability that n particles are in state j (we can leave the subscript off n without producing confusion here), for the three statistics, is









[Figure 24-1 here: two panels of particle density per energy range plotted against y = ε/kT, one for a nondegenerate gas and one for a degenerate gas (η = 10), showing the BE, MB, and FD curves and the dashed parabola 2(y/π)^{1/2}.]

Fig. 24-1. Density of particles per energy range for a gas, according to the three statistics, for nondegenerate and degenerate conditions. Dashed curve 2(y/π)^{1/2} corresponds to a density of one particle per particle state. Area under each curve equals η. See also page 233.






    \text{For bosons:} \qquad f_j(n) = e^{n(\mu - \epsilon_j)/kT} - e^{(n+1)(\mu - \epsilon_j)/kT}

    \text{For MB particles:} \qquad f_j(n) = \frac{1}{n!}\, e^{n(\mu - \epsilon_j)/kT} \exp\left[-e^{(\mu - \epsilon_j)/kT}\right]

    \text{For fermions:} \qquad f_j(n) = \frac{1}{1 + e^{(\mu - \epsilon_j)/kT}} \times \begin{cases} 1 & n = 0 \\ e^{(\mu - \epsilon_j)/kT} & n = 1 \\ 0 & n > 1 \end{cases}    (24-15)



Reference to Eqs. (24-12) shows that the mean value of n is given by the usual formula,

    \bar n_j = \sum_n n\, f_j(n)    (24-16)

With a bit of algebraic juggling, we can then express the probability f_j(n) in terms of n and of its mean value n̄_j (we can call it n̄ without confusion here):



    f_j(n) = \begin{cases} \bar n^{\,n}/(\bar n + 1)^{n+1} & \text{for bosons} \\ \left[(\bar n)^n/n!\right] e^{-\bar n} & \text{for MB particles} \\ 1 - \bar n \text{ if } n = 0, \quad = \bar n \text{ if } n = 1, \quad = 0 \text{ if } n > 1 & \text{for fermions} \end{cases}    (24-17)



The distribution function f_j(n) for bosons is a geometric distribution. The ratio f_j(n)/f_j(n-1) is a constant, n̄/(n̄ + 1); the chance of adding one more particle to state j is the same, no matter how many bosons are already in the state. The MB distribution is the familiar Poisson distribution of Eqs. (11-15) and (23-10), with ratio f_j(n)/f_j(n-1) = n̄/n, which decreases as n increases. The presence of maxwellons in a given state discourages the addition of others, to some extent. On the other hand, f_j(n) for FD statistics is zero for n > 1; if a fermion occupies a given state, no other particle can join it (the Pauli principle).

Using these expressions for f_j(n) we can calculate the variance (Δn_j)² of the occupation number n_j for state j, for each kind of statistics:

    (\Delta n_j)^2 = \sum_n (n - \bar n_j)^2 f_j(n) = \sum_n n^2 f_j(n) - (\bar n_j)^2

    = \begin{cases} \bar n_j(\bar n_j + 1) & \text{for bosons} \\ \bar n_j & \text{for MB particles} \\ \bar n_j(1 - \bar n_j) & \text{for fermions} \end{cases}    (24-18)



and, from this, obtain the fractional fluctuation Δn_j/n̄_j of the occupation numbers,

    \Delta n_j/\bar n_j = \begin{cases} \sqrt{1 + (1/\bar n_j)} & \text{for bosons} \\ \sqrt{1/\bar n_j} & \text{for MB particles} \\ \sqrt{(1/\bar n_j) - 1} & \text{for fermions} \end{cases}    (24-19)

The fractional fluctuation is greatest for the least-occupied states (n̄_j ≪ 1). As the mean occupation number increases the fluctuation decreases, going to zero for fermions as n̄_j → 1 (the degenerate state) and to zero for maxwellons as n̄_j → ∞. But the standard deviation Δn_j for bosons is never less than the mean occupancy n̄_j. We shall see later that the local fluctuations in intensity of thermal radiation (photons are bosons) are always large, of the order of magnitude of the intensity itself, as predicted by Eq. (24-19).
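As a numerical check of Eqs. (24-17) and (24-18), the sketch below builds each distribution from its formula, for an arbitrarily assumed n̄, and confirms that the mean and variance come out as stated:

```python
import math

# Numerical check of Eqs. (24-17) and (24-18) for an assumed mean
# occupation nbar (which must be < 1 for the fermion case).
nbar = 0.7

def bose(n):    return nbar**n / (nbar + 1.0)**(n + 1)
def maxwell(n): return math.exp(n*math.log(nbar) - math.lgamma(n+1) - nbar)
def fermi(n):   return (1.0 - nbar, nbar)[n] if n <= 1 else 0.0

def moments(f, n_max=200):
    m1 = sum(n * f(n) for n in range(n_max))
    m2 = sum(n*n * f(n) for n in range(n_max))
    return m1, m2 - m1*m1

for name, f, predicted in (("BE", bose,    nbar*(nbar + 1)),
                           ("MB", maxwell, nbar),
                           ("FD", fermi,   nbar*(1 - nbar))):
    mean, var = moments(f)
    print(f"{name}: mean {mean:.6f}  variance {var:.6f}  Eq.(24-18): {predicted:.6f}")
```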



25 



Bose-Einstein 
Statistics 



The previous chapter has indicated that, as the temperature is lowered or the density is increased, systems of bosons or of fermions enter a state of degeneracy, wherein their thermodynamic properties differ considerably from those of the corresponding classical system, subject to Maxwell-Boltzmann statistics. These differences are apparent even when the systems are perfect gases, where the interaction between particles is limited to the few collisions needed to bring the gas to equilibrium. Indeed, in some respects, the differences between the three statistics are more apparent when the systems are perfect gases than when they are more complex in structure. Therefore it is useful to return once again to the system we started to study in Section 2, this time to analyze in detail the differences caused by differences in statistics. In this chapter we take up the properties of a gas of bosons. Two different cases will be considered: a gas of photons (electromagnetic radiation) and a gas of material particles, such as helium atoms.

General Properties of a Boson Gas

Using Eqs. (24-8) et seq., we compute the distribution function, mean occupation numbers, and thermodynamic functions for the gas of bosons:

    f_j(n) = \left[1 - e^{(\mu - \epsilon_j)/kT}\right] e^{n(\mu - \epsilon_j)/kT} = \left[\bar n_j^{\,n}/(\bar n_j + 1)^{n+1}\right]

    \bar n_j = \sum_{n=0}^{\infty} n\, f_j(n) = \left[e^{(\epsilon_j - \mu)/kT} - 1\right]^{-1}

    \Omega_{\rm BE} = -PV = -kT \ln \mathfrak{Z} = kT \sum_j \ln\left[1 - e^{(\mu - \epsilon_j)/kT}\right]

    \bar N = -(\partial\Omega/\partial\mu)_{TV} = \sum_{j=1}^{\infty} \bar n_j = \sum_{j=1}^{\infty} \left[e^{(\epsilon_j - \mu)/kT} - 1\right]^{-1}

    S = -(\partial\Omega/\partial T)_{V\mu} = (U - \bar N\mu - \Omega)/T

    U = \sum_{j=1}^{\infty} \epsilon_j \bar n_j; \qquad C_v = T(\partial S/\partial T)_{\bar N V}    (25-1)

where μ must be less than the lowest particle energy ε₁ in order that the series expansions converge. All these quantities are functions of the chemical potential μ. For systems in which the mean number of particles N̄ is specified, the value of μ, as a function of N̄, V, and T, is determined implicitly by the equation for N̄ given above. The value obtained by inverting this equation is then inserted in the other equations, to give S, U, P, and C_v as functions of N̄, V, and T.

In the case of the photon gas, in equilibrium at temperature T in a volume V (black-body radiation), the number of photons N̄ in volume V is not arbitrarily specified; it adjusts itself so that the radiation is in equilibrium with the constant-temperature walls of the container. Since, at constant T and V, the Helmholtz function F = Ω + μN̄ comes to a minimal value at equilibrium [see the discussion following Eq. (8-10)], if N̄ is to be varied to reach equilibrium at constant T and V, we must have

    (\partial F/\partial \bar N)_{TV} = \frac{\partial}{\partial \bar N}(\Omega + \mu\bar N) = \mu \quad \text{equal to zero}    (25-2)

Therefore, for a photon gas at equilibrium, at constant T and V, the chemical potential of the photons must be zero [see the discussion following Eq. (7-8)].

Classical Statistics of Black-Body Radiation

At this point the disadvantages of a "logical" presentation of the subject become evident; a historical presentation would bring out more vividly the way experimental findings forced a revision of classical statistics. It was the work of Planck, in trying to explain the frequency distribution of electromagnetic radiation, which first exhibited the inadequacy of the Maxwell-Boltzmann statistics and pointed the way to the development of quantum statistics. A purely logical demonstration, that quantum statistics does conform with observation, leaves out the atmosphere of struggle which permeated the early development of quantum theory, struggle to find a theory that would fit the many new and unexpected measurements.






Experimentally, the energy density of black-body radiation having frequency between ω/2π and (ω + dω)/2π was found to fit an empirical formula

    d\epsilon = \frac{\hbar}{\pi^2 c^3} \frac{\omega^3\, d\omega}{e^{\hbar\omega/kT} - 1} \to \begin{cases} (\omega^2 kT/\pi^2 c^3)\, d\omega & kT \gg \hbar\omega \\ (\hbar\omega^3/\pi^2 c^3)\, e^{-\hbar\omega/kT}\, d\omega & kT \ll \hbar\omega \end{cases}    (25-3)

where, at the time, ℏ was an empirical constant, adjusted to fit the formula to the experimental curves. Classical statistical mechanics could explain the low-frequency part of the curve (kT ≫ ℏω) but could not explain the high-frequency part (Fig. 25-1).





[Figure 25-1 here: the Planck curve of dε/dω plotted against y = ℏω/kT, with the Rayleigh-Jeans curve shown dashed.]

Fig. 25-1. The Planck distribution of energy density of black-body radiation per frequency range. Dashed line is Rayleigh-Jeans distribution.



Classically, each degree of freedom of the electromagnetic radiation should possess a mean energy kT [see the discussion of Eq. (15-5)], so determining the formula for dε should simply involve finding the number of degrees of freedom of the radiation between ω and ω + dω. Since the radiation is a collection of standing waves, we can proceed exactly as was done in Chapter 20, in finding the number of standing waves in a crystal with frequencies between ω/2π and (ω + dω)/2π [see Eqs. (20-10) and (20-11)]. In a rectangular enclosure of sides l_x, l_y, l_z the allowed values of ω are

    \omega_j = \pi c\left[(k/l_x)^2 + (m/l_y)^2 + (n/l_z)^2\right]^{1/2}    (25-4)



where k, m, n are integers and where c is the velocity of light. Each different combination of k, m, n corresponds to a different electromagnetic wave, a different degree of freedom or, in quantum language, a different quantum state j for a photon.

By methods completely analogous to those used in Chapter 20, we find that the number of different degrees of freedom having allowed values of ω_j between ω and ω + dω is

    dj = (V/\pi^2 c^3)\, \omega^2\, d\omega, \qquad V = l_x l_y l_z    (25-5)

which is twice the value given in Eq. (20-11), because light can have two mutually perpendicular polarizations, so there are two different standing waves for each set of values of k, m, and n. As mentioned before, this formula is valid for nonrectangular enclosures of volume V.

Now, if each degree of freedom carries a mean energy kT, then the total energy within V, between ω and ω + dω, is (kT) dj and the energy density of radiation with frequency between ω/2π and (ω + dω)/2π is

    d\epsilon = (kT/V)\, dj = (\omega^2 kT/\pi^2 c^3)\, d\omega

which is called the Rayleigh-Jeans formula. We see that it fits the empirical formula (25-3) at the low-frequency end (see the dashed curve of Fig. 25-1) but not for high frequencies.

As a matter of fact it is evident that the Rayleigh-Jeans formula cannot hold over the whole range of ω from 0 to ∞, for the integral of dε would then diverge. If this were the correct formula for the energy density then, to reach equilibrium with its surroundings, a container filled with radiation would have to withdraw an infinite amount of energy from its surroundings; all the thermal energy in the universe would drain off into high-frequency electromagnetic radiation. This outcome was dramatized by calling it the ultraviolet catastrophe. There is no sign of such a fate, so the Rayleigh-Jeans formula cannot be correct for high frequencies. In fact the empirical curve has the energy density dε dropping down exponentially, according to the factor e^{-ℏω/kT}, when ℏω ≫ kT, so that the integral of the empirical expression does not diverge.

Parenthetically, a similar catastrophe cannot arise with waves in a crystal, because a crystal is not a continuous medium; there can only be as many different standing waves in a crystal as there are atoms in the crystal; integration over ω only goes to ω_m [see Eq. (20-13)], not to ∞. In contrast, the electromagnetic field is continuous, not atomic, so there is no lower limit to wavelength, no upper limit to the frequency of its standing waves.

A satisfactory exposition (to physicists, at any rate) would be to proceed from empirical formula (25-3) to the theoretical model that fits it, showing that the experimental findings lead inexorably to the conclusion that photons obey Bose-Einstein statistics. We have not the space to do this; we shall show instead that assuming photons are bosons (with μ = 0) leads directly to the empirical formula (25-3) and, by identifying the empirical constant ℏ = h/2π with Planck's constant, joins the theory of black-body radiation to all the rest of quantum theory.

Statistical Mechanics of a Photon Gas

As we have already pointed out in Eq. (25-2), photons are a rather special kind of boson; their chemical potential is zero when they are in thermal equilibrium in volume V at temperature T. Formulas (25-1) thus simplify. For example, the mean number of photons in state j is n̄_j = (e^{ε_j/kT} - 1)^{-1}. But state j has been defined as the state that has frequency ω_j/2π, where ω_j is given in Eq. (25-4) in terms of its quantum numbers. Since a photon of frequency ω_j/2π has energy ε_j = ℏω_j, the mean occupation number becomes

    \bar n_j = 1\Big/\left(e^{\hbar\omega_j/kT} - 1\right)    (25-6)

Since there are (V/π²c³)ω² dω = dj different photon states (different standing waves) with frequencies between ω/2π and (ω + dω)/2π, the mean number of photons in this frequency range in the container is

    d\bar n = \frac{V}{\pi^2 c^3} \frac{\omega^2\, d\omega}{e^{\hbar\omega/kT} - 1}    (25-7)

The mean energy density dε of black-body radiation in this frequency range is dn̄ times the energy ℏω per photon, divided by V, which turns out to be identical with the empirical formula for dε given in Eq. (25-3). Thus the assumption that photons are bosons with μ = 0 leads directly to agreement with observation.

The frequency distribution of radiation given in Eq. (25-3) is called the Planck distribution. The energy density per unit frequency band increases proportionally to ω² at low frequencies; it has a maximum at ω = 2.82(kT/ℏ) [where x = 2.82 is the solution of the equation (3 - x)eˣ = 3] and it drops exponentially to zero as ω increases beyond this maximum. Measurements have checked all these details; in fact this was the first way by which the value of h was determined. The mean number of photons, and the mean energy density, of all frequencies can be obtained from the following formulas:

    \int_0^\infty \frac{x^2\, dx}{e^x - 1} = 2.404; \qquad \int_0^\infty \frac{x^3\, dx}{e^x - 1} = \frac{\pi^4}{15} = 6.494    (25-8)


For example, the mean energy density is

    \epsilon(T) = \int d\epsilon = \frac{(kT)^4}{\pi^2 c^3 \hbar^3} \int_0^\infty \frac{x^3\, dx}{e^x - 1} = aT^4    (25-9)

which is the same as Eq. (7-8), of course, only now we have obtained an expression for Stefan's constant a in terms of k, h, and c (which checks with experiment).

The grand potential Ω (which also equals F, since μ = 0) is

    \Omega = kT \int dj\, \ln\left(1 - e^{-\hbar\omega/kT}\right) = \frac{kTV}{\pi^2 c^3} \int_0^\infty \omega^2\, d\omega\, \ln\left(1 - e^{-\hbar\omega/kT}\right)

    = -\tfrac{1}{3} aVT^4 = -\tfrac{1}{3} V\epsilon(T) = F    (25-10)

where we have integrated by parts. The other thermodynamic quantities are obtained by differentiation,

    S = -\left(\frac{\partial\Omega}{\partial T}\right)_V = \tfrac{4}{3} aVT^3; \qquad P = -\left(\frac{\partial\Omega}{\partial V}\right)_T = \tfrac{1}{3} aT^4; \qquad U = \Omega + TS = aVT^4    (25-11)

which also check with Eqs. (7-8). The mean number of photons of any frequency in volume V is

    \bar N = \int d\bar n = \frac{V k^3 T^3}{\pi^2 c^3 \hbar^3} \int_0^\infty \frac{x^2\, dx}{e^x - 1} = \frac{2.404}{\pi^2}\left(\frac{kT}{\hbar c}\right)^3 V    (25-12)
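Evaluated at an assumed temperature of 300°K (an illustrative choice), Eqs. (25-9) and (25-12) give:

```python
import math

# Eqs. (25-9) and (25-12) evaluated at T = 300 K (an assumed,
# illustrative temperature).
k, hbar, c, T = 1.380649e-23, 1.054571817e-34, 2.99792458e8, 300.0

a = math.pi**2 * k**4 / (15.0 * c**3 * hbar**3)      # Stefan's constant a
u = a * T**4                                         # energy density, Eq. (25-9)
n = (2.404 / math.pi**2) * (k * T / (hbar * c))**3   # photons/m^3, Eq. (25-12)

print("energy density :", u, "joule/m^3")
print("photon density :", n, "per m^3")
print("mean energy per photon:", u / n / (k * T), "kT")  # = 6.494/2.404 = 2.70
```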



Statistical Properties of a Photon Gas

We saw earlier that the assumption of classical equipartition of energy for each degree of freedom of black-body radiation leads to the nonsensical conclusion that each container of radiation has an infinite heat capacity. The assumption that photons are bosons, with μ = 0, leads to the Planck formula, rather than the Rayleigh-Jeans formula, and leads to the conclusion that the mean energy carried per degree of freedom of the thermal radiation is

    \hbar\omega_j \bar n_j = \hbar\omega_j\Big/\left(e^{\hbar\omega_j/kT} - 1\right) \simeq kT - \tfrac{1}{2}\hbar\omega_j \approx kT, \qquad \hbar\omega_j \ll kT    (25-13)

which equals kT, the classical value, only for low-frequency radiation. At high frequencies each photon carries so much energy that, even in thermal equilibrium, very few can be excited (just as with other quantized oscillators that have an energy level spacing large compared to kT), and the mean energy possessed by these degrees of freedom falls off proportionally to e^{-ℏω/kT}. Thus, as we have seen, the mean energy ε(T) is not infinite as classical statistics had predicted.

As was pointed out at the end of Chapter 24, the fluctuations in a boson gas are larger than those in a classical gas. For a photon gas the standard deviation Δn_j of the number of photons in a particular state j, above and below the mean value n̄_j, is √(n̄_j(n̄_j + 1)) and consequently the fractional fluctuation is

    \Delta n_j/\bar n_j = \sqrt{(\bar n_j + 1)/\bar n_j} = e^{\hbar\omega/2kT}    (25-14)

This is also equal to the fractional fluctuation of energy density Δε_j/ε_j or of intensity ΔI_j/I_j of the standing wave having frequency ω_j/2π. This quantity is always greater than unity, indicating that the standard deviation of the intensity of a standing wave is equal to or greater than its mean intensity.

Such large fluctuations may be unusual for material gases; they are to be expected for standing waves. If the j-th wave is fairly steadily excited (i.e., if n̄_j > 1, i.e., if e^{ℏω_j/kT} < 2) then it will be oscillating more or less sinusoidally and its intensity will vary more or less regularly between zero and twice its mean value, which corresponds to a standard deviation roughly equal to its mean value. If, on the other hand, the standing wave is excited only occasionally, the sinusoidal oscillation will occur only occasionally and the amplitude will be zero in between times. In this case the standard deviation will be larger than the mean. Thus Eq. (25-14) is not as anomalous as it might appear at first.

The variance (Δdn)² of the number of photons in all the standing waves in volume V having frequency between ω/2π and (ω + dω)/2π is the sum of the variances of the component states,

    (\Delta dn)^2 = (\Delta n_j)^2 \left(\frac{V\omega^2\, d\omega}{\pi^2 c^3}\right) = \frac{V\omega^2\, d\omega}{\pi^2 c^3} \frac{e^{\hbar\omega/kT}}{\left(e^{\hbar\omega/kT} - 1\right)^2}

and the fractional fluctuation of energy in this frequency range is

    \Delta dn/d\bar n = \sqrt{\pi^2 c^3/V\omega^2\, d\omega}\; e^{\hbar\omega/2kT}    (25-15)

The wider the frequency band dω and the greater the volume considered, the smaller is the fractional fluctuation; including a number of standing waves in the sample "smooths out" the fluctuations.

Statistical Mechanics of a Boson Gas

When the bosons comprising the gas are material particles, rather than photons, μ is not zero but is determined by the mean particle density. The particle energy ε is not ℏω but is the kinetic energy p²/2m of the particle, if m is its mass. We have already shown [see Eqs. (19-2) and (21-7)] that, for elementary particles in a box of "normal" size, the translational levels are spaced closely enough so that we can integrate over phase space instead of summing over particle states. The number of particle states in an element dx dy dz dp_x dp_y dp_z = dV_q dV_p of phase space is g(dV_q dV_p/h³), where g is the multiplicity factor caused by the particle spin. If the spin is s and no magnetic field is present, g = (2s + 1) different spin orientations have the same energy ε. Therefore the sum for N̄ of Eq. (25-1) becomes

    \bar N = (g/h^3) \int \cdots \int \left[e^{(\epsilon - \mu)/kT} - 1\right]^{-1} dV_q\, dV_p

    = (gV/h^3) \int_0^{2\pi} d\beta \int_0^{\pi} \sin\alpha\, d\alpha \int_0^{\infty} \left[e^{(\epsilon - \mu)/kT} - 1\right]^{-1} p^2\, dp    (25-16)

where angles α and β are the spherical angles in momentum space of Eq. (12-1).

We can integrate over α and β and, since ε = (p²/2m) or p = √(2mε), we can change to ε for the other integration variable, so

    \bar N = 2\pi g V\left(\frac{2m}{h^2}\right)^{3/2} \int_0^\infty \frac{\sqrt{\epsilon}\, d\epsilon}{e^{(\epsilon - \mu)/kT} - 1} = gV\left(\frac{2\pi mkT}{h^2}\right)^{3/2} f_{1/2}(-\mu/kT)    (25-17)

    f_m(x) = \frac{1}{\Gamma(m+1)} \int_0^\infty \frac{z^m\, dz}{e^{z+x} - 1} = \sum_{n=1}^{\infty} \frac{e^{-nx}}{n^{m+1}} \to e^{-x}, \qquad x \to \infty

The series for f_m converges if x is positive. However we recollect that with Bose-Einstein statistics μ must be less than the lowest energy level, which is zero for gas particles. Therefore μ is negative and x = -(μ/kT) is positive, and the series does converge.



It should be pointed out that the change from summation to integration has one defect; it leaves out the ground state ε = 0. This term, in the sum of particle states, is the largest term of all; in the integral approximation it is completely left out, because the density function √ε goes to zero there. Ordinarily this does not matter, for there are so many terms in the sum for N̄ for ε small compared to kT (which are included in the integral) that the omission of this one term makes a negligible difference in the result. At low temperatures, however, bosons "condense" into this lowest state [see Eq. (24-13)] and its population becomes much greater than that for any other state. We shall find that above a limiting temperature T₀ the ground state is no more densely populated than many of its neighbors and that it can then be neglected without damage. Below T₀, however, the lowest state begins to collect more than a normal share of particles, and we have to add an extra term to the integral for N̄, corresponding to the number of particles that have "condensed" into the zero-energy state.

We should have mentioned this complication when we were discussing a photon gas, of course, for the integrals of (25-9) to (25-11) also have left out the zero-energy state. But a photon of zero energy has zero frequency, so this lowest energy state represents a static electromagnetic field. We do not usually consider a static field as an assemblage of photons and, furthermore, the exact number of photons present is not usually of interest; the measurable quantities are the energy density and intensity. For more-material bosons, however, the mean number of particles can be measured directly, so we must account for the excess of particles in the zero-energy state when the gas is degenerate.

Thermal Properties of a Boson Gas 

The value of -\i is determined implicitly by the equation 

77 s Ni|/gV = f 1/2 (x) ; x = - jLt/kT ; 1 1 = h/"|/27rmkT 

(25-18) 

which can be inverted to obtain -/x as a function of T and 77. When 

the parameter 77 is small (low density and/or high temperature), f 1/2 

- x u. /kT 
has its limiting form e = e and 

H kT In (gV/N£|) = kT In 7?, 77 — 

which is the value for a classical, perfect gas of Eqs. (23-7). The 
computed values of x for larger values of 77 are given in Table 25-1. 
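The series form of f_m(x) makes Table 25-1 easy to reproduce. The sketch below sums the series and inverts η = f_{1/2}(x) by bisection (the method and tolerances are arbitrary choices):

```python
import math

# The series form of f_m(x) from Eq. (25-17), and the inversion of
# eta = f_{1/2}(x) by bisection; enough to reproduce the x column of
# Table 25-1.
def f(m, x):
    s, n = 0.0, 1
    while True:
        t = math.exp(-n * x) / n**(m + 1)
        s += t
        if t < 1e-10:
            return s
        n += 1

def x_of_eta(eta):
    lo, hi = 1e-9, 50.0                  # f_{1/2}(x) decreases as x grows
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(0.5, mid) > eta else (lo, mid)
    return 0.5 * (lo + hi)

print(round(f(0.5, 0.0), 3))             # ~2.61 (the condensation value 2.612)
for eta in (0.1, 1.0, 2.0, 2.5):
    print(eta, round(x_of_eta(eta), 3))  # compare the x column of Table 25-1
```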

The internal energy U and the grand potential Ω of the gas can also be expressed as integrals,



TABLE 25-1
Functions for a Boson Gas

    η        x        T/T₀     PV/N̄kT   S/N̄k    2C_v/3N̄k   N_c/N̄
    0        ∞        ∞        1.000    ∞       1.00       0
    0.1      2.342    8.803    0.977    4.784   1.01       0
    1        0.358    1.897    0.818    2.403   1.09       0
    2        0.033    1.195    0.637    1.625   1.19       0
    2.5      0.001    1.030    0.536    1.341   1.26       0
    2.612    0        1.000    0.513    1.282   1.28       0
    3        0        0.912    0.447    1.116   1.12       0.129
    10       0        0.409    0.134    0.335   0.33       0.739
    30       0        0.196    0.045    0.112   0.11       0.913
    ∞        0        0        0        0       0          1.000


    U = 2\pi g V\left(\frac{2m}{h^2}\right)^{3/2} \int_0^\infty \frac{\epsilon^{3/2}\, d\epsilon}{e^{(\epsilon - \mu)/kT} - 1} = \tfrac{3}{2}\bar N kT\, \frac{f_{3/2}(-\mu/kT)}{f_{1/2}(-\mu/kT)}

    -\Omega = +PV = -2\pi kT g V\left(\frac{2m}{h^2}\right)^{3/2} \int_0^\infty \sqrt{\epsilon}\, d\epsilon\, \ln\left[1 - e^{(\mu - \epsilon)/kT}\right] = \tfrac{2}{3} U

    S = -(\partial\Omega/\partial T)_{V\mu} = \tfrac{5}{3}(U/T) + (-\mu\bar N/T) = \bar N k\left[x + \frac{5 f_{3/2}(x)}{2 f_{1/2}(x)}\right]    (25-19)



where we have integrated the expression for -Ω by parts to obtain (2/3)U, and where we have used the equation (d/dx)f_m(x) = -f_{m-1}(x) to obtain the expression for the entropy S. Compare these formulas with the ones of Eqs. (21-14) for a gas of MB particles. The term x here corresponds to the logarithmic terms in the formula for S, for example, and the two expressions for S are identical when f_{3/2} = f_{1/2}, i.e., when η → 0.

The heat capacity C_v could be computed by taking the partial of U with respect to T, holding N̄ and V constant. But the independent variables here are μ, T, and V and, rather than bothering to change variables, we use the formula C_v = T(∂S/∂T)_{N̄V}. The values can be computed by numerical differentiation or by series expansion. The important quantities are tabulated in Table 25-1 for a range of values of the parameter η. We see that when η is small, PV is practically equal to N̄kT (the perfect gas law) and C_v is practically equal to (3/2)N̄k.

Equations (25-17) and (25-19) show that

    (S/\bar N k) = \tfrac{5}{2}\left[f_{3/2}(-\mu/kT)/f_{1/2}(-\mu/kT)\right] - (\mu/kT) \quad\text{and}\quad (\bar N/VT^{3/2}) = g(2\pi mk/h^2)^{3/2} f_{1/2}(-\mu/kT)

are functions of -μ/kT alone. In an adiabatic expansion both N̄ and S remain constant; therefore in an adiabatic expansion -μ/kT and VT^{3/2} stay constant for a boson gas. We can also show that PV^{5/3} is constant during an adiabatic expansion. These results are identical with Eqs. (4-12) and (6-5) for a classical perfect gas. Evidently a boson gas behaves like a perfect gas of point particles in regard to adiabatic compression, although its equation of state is not that of a perfect gas (Table 25-1 shows that PV/N̄kT diminishes as η increases).
When η is less than unity, a first-order approximation is

    -\Omega = PV = \tfrac{2}{3}U \approx \bar N kT\left[1 - (\eta/2^{5/2})\right] = \bar N kT\left[1 - \frac{\bar N}{2gV}\left(\frac{h^2}{4\pi mkT}\right)^{3/2}\right]    (25-20)


from which we can obtain C_v by differentiation of U (since the independent variables are now N̄, T, and V). The boson gas exhibits a smaller pressure and a larger specific heat than a classical perfect gas, at least for moderate temperatures and densities.

The Degenerate Boson Gas

As the density of particles is increased and/or the temperature is decreased, η increases, x = -μ/kT decreases, and the thermal properties of the gas depart farther and farther from those of a classical perfect gas, until at η = 2.612, μ becomes zero. If η becomes larger than this, Eq. (25-17) no longer can be satisfied. For the maximum value of f_{1/2}(x) is 2.612, for μ = 0, and μ cannot become positive. The only way the additional particles can be accommodated is to put them into the hitherto-neglected zero-energy state mentioned several pages back.

If N̄ is held constant and T is reduced, the condensation starts when η = 2.612, and thus when T reaches the value

    T_0 = (h^2/2\pi mk)(\bar N/2.612\, gV)^{2/3} = 3.31(\hbar^2/mk)(\bar N/gV)^{2/3}    (25-21)

Any further reduction of T will force some of the particles to condense into the zero-energy state. In fact the number N_x of particles that can stay in the upper states are those which satisfy Eq. (25-17) with μ = 0,



    N_x = 2.612\, gV(2\pi mkT/h^2)^{3/2} = \bar N(T/T_0)^{3/2}    (25-22)

and the rest,

    N_c = \bar N\left[1 - (T/T_0)^{3/2}\right]

are condensed in the ground state, exerting no pressure and carrying no energy. Therefore, the thermodynamic functions for the gas in this partly condensed state are

    PV = -\Omega = \tfrac{2}{3}U = 0.513\, \bar N kT(T/T_0)^{3/2} = 0.086\, \frac{m^{3/2} gV}{\hbar^3}(kT)^{5/2}

or

    P = 0.086\, (m^{3/2} g/\hbar^3)(kT)^{5/2}

    S = 5U/3T = 1.28\, \bar N k(T/T_0)^{3/2} = \tfrac{2}{3} C_v    (25-23)



The pressure is independent of volume, because this is all the pressure the uncondensed particles can withstand. Further reduction of volume simply condenses more particles into the ground state, where they contribute nothing to the pressure. The heat capacity of the gas as a function of T has a discontinuity in slope at T₀, as shown in Fig. 25-2. At high temperatures the gas is similar to an MB gas of point particles, with C_v = (3/2)N̄k. As T is diminished C_v rises until, at T = T₀, it has its largest value, C_v = 1.92N̄k. For still smaller values of T, C_v decreases rapidly, to become zero at T = 0.



[Figure 25-2 here: C_v plotted against T/θ for the three statistics.]

Fig. 25-2. Heat capacity of a gas (according to the three statistics) versus temperature in units of θ = (ℏ²/mk)(N̄/gV)^{2/3}.



The "condensed" particles are not condensed in position space, as in a change of phase; they are "condensed" in momentum space, at p = 0, a set of N̄[1 - (T/T₀)^{3/2}] stationary particles, distributed at random throughout the volume V. Liquid helium II acts like a mixture of a condensed phase (superfluid) plus some ordinary liquid, the fraction of superfluid increasing as T is decreased. Since He II is a liquid, the theoretical model to account for its idiosyncrasies is much more complicated than the gas formulas we have developed here. Although the theory is not yet complete, the assumption that He⁴ atoms are bosons does explain many of the peculiar properties of He II.
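Putting numbers into Eq. (25-21), with liquid He⁴ treated very roughly as a perfect boson gas of the liquid's density (both the treatment and the density value, 145 kg/m³, are assumptions of this sketch), lands close to the observed transition at 2.17°K:

```python
import math

# Eq. (25-21) applied, as a rough sketch, to liquid He-4: the gas
# formulas are being stretched to a liquid, and the density used
# (145 kg/m^3) is an assumed round value.
k = 1.380649e-23
h = 6.62607015e-34
m = 6.646e-27          # mass of a He-4 atom, kg
g = 1                  # He-4 has spin zero
n = 145.0 / m          # N-bar/V per m^3

T0 = (h**2 / (2 * math.pi * m * k)) * (n / (2.612 * g))**(2.0/3.0)
print("T0 =", round(T0, 2), "K")    # about 3.1 K

for T in (0.5, 1.0, 2.0, 3.0):      # condensate fraction, Eq. (25-22)
    print(T, "K:", round(max(0.0, 1.0 - (T/T0)**1.5), 3))
```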




Fermi-Dirac 
Statistics 



Fermi-Dirac statistics is appropriate for electrons and other elementary particles that are subject to the Pauli exclusion principle. The occupation numbers n_j can only be zero or unity and the mean number of particles in state j is the n̄_j of Eq. (24-10). In this chapter we shall work out the thermal properties of a gas of fermions, to compare with those of a gas of bosons and with those of a perfect gas of MB particles, particularly in the region of degeneracy. There are no FD analogues to photons, with μ = 0.

General Properties of a Fermion Gas

For a gas of fermions, at temperature T in a volume V, the particle energy is ε = p²/2m as before, and the number of allowed translational states in the element of phase space dV_q dV_p is g(dV_q dV_p/h³) as before (g is the spin multiplicity 2s + 1). Integrating over dV_q and over all directions of the momentum vector, we find the number of states with kinetic energy between ε and ε + dε is 2πgV(2m/h²)^{3/2}√ε dε, as before. Multiplying this by n̄_j [Eq. (24-10)] gives us the mean number of fermions with kinetic energy between ε and ε + dε,

    d\bar N = 2\pi g V(2m/h^2)^{3/2} \frac{\sqrt{\epsilon}\, d\epsilon}{e^{(\epsilon - \mu)/kT} + 1}    (26-1)

which is to be compared with the integrand of Eq. (25-17) for the boson gas and with dN̄ = 2πgV(2m/h²)^{3/2} e^{(μ-ε)/kT} √ε dε for a perfect gas of MB particles.

Figure 24-1 compares plots of dN̄/dε for these three statistics for two different degrees of degeneracy. As η = (N̄/gV)(h²/2πmkT)^{3/2} varies, the MB distribution changes scale, but not shape. For small values of η, the values of μ = -xkT for the three cases are all negative and do not differ much in value, nor do the three curves differ much in shape. In this region the MB approximation is satisfactory.




For large values of η the curves differ considerably, and the values of the chemical potential μ differ greatly for the three cases. For bosons, as we saw in Chapter 25, μ is zero and a part of the gas has "condensed" into the ground state, making no contribution to the energy or pressure of the gas, and being represented on the plot by the vertical arrow at y = 0. For fermions, μ is positive, and the states with ε less than μ are practically completely filled, whereas those with ε greater than μ are nearly empty. Because of the Pauli principle, no more than one particle can occupy a state; at low temperatures and/or high densities the lowest states are filled, up to the energy ε = μ, and the states for ε > μ are nearly empty, as shown by the curve (which is the parabolic curve 2√(y/π) for y less than -x and which drops to zero shortly thereafter).

The dotted parabola 2√(y/π) corresponds to the level density dN̄/dε = 2πgV(2m/h²)^{3/2}√ε, corresponding to one particle per state. We see that the curve for MB particles rises above this for η large, corresponding to the fact that some of the lower levels have more than one particle apiece. The BE curve climbs still higher at low energies. The FD curve, however, has to keep below the parabola everywhere.

The conduction electrons in a metal are the most accessible exam- 
ple of a fermion gas. In spite of the fact that these electrons are mov- 
ing through a dense lattice of ions, they behave in many respects as 
though the lattice were not present. Their energy distribution is more 
complicated than the simple curves of Fig. 24-1 and, because of the 
electric forces between them and the lattice ions, the pressure they 
exert on the container is much less than that exerted by a true gas; 
nevertheless their heat capacity, entropy, and mean energy are re- 
markably close to the Fermi-gas values. Measurements on conduction 
electrons constitute most of the verifications of the theoretical model 
to be described in this chapter. 

The Degenerate Fermion Gas

As T approaches zero the FD distribution takes on its fully degenerate form, with all states up to the N-th completely filled and all states beyond the N-th completely empty. In other words, the limiting value of μ (call it μ₀) is large and positive, and

    d\bar N = \begin{cases} 2\pi g V(2m/h^2)^{3/2} \sqrt{\epsilon}\, d\epsilon & \epsilon < \mu_0 \\ 0 & \epsilon > \mu_0 \end{cases}    (26-2)

where μ₀ has the value that allows the integral of dN̄ to equal N̄,

    \bar N = 2\pi g V(2m/h^2)^{3/2} \int_0^{\mu_0} \sqrt{\epsilon}\, d\epsilon \qquad\text{or}\qquad \mu_0 = \beta(\bar N/V)^{2/3}    (26-3)





where β = (h²/2m)(3/4πg)^{2/3} = 5.84 × 10⁻³⁸ joule-meter² for electrons (g = 2).
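The constant quoted here is easy to verify numerically (a short check with the modern values of h and the electron mass):

```python
import math

# Check of the electron value of beta in Eq. (26-3).
h, m, g = 6.62607015e-34, 9.1093837e-31, 2
beta = (h**2 / (2 * m)) * (3.0 / (4 * math.pi * g))**(2.0/3.0)
print(beta)    # 5.84e-38 joule-meter^2, as quoted
```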

Even at absolute zero most of the fermions are in motion, some of them moving quite rapidly. For an electron gas of density ρ = N̄m/V kg per m³, the kinetic energy of the fastest, μ₀, is roughly equal to 40ρ^{2/3} electron volts; μ₀/k is approximately equal to 4.5 × 10⁵ ρ^{2/3} °K. In other words the top of the occupied levels (the Fermi level) corresponds to the mean energy [= (3/2)kT] of an MB particle in a gas at the temperature 3 × 10⁵ ρ^{2/3} °K. For the conduction electrons in metals, where ρ_el ≈ 3 × 10⁻² kg/m³, this corresponds to about 30,000 °K; for free electrons in a white-dwarf star, where ρ_el > 1000, it corresponds to more than 3 × 10⁷ degrees. Until the actual temperature of a Fermi gas is larger than this value, it remains degenerate. The parameter η = 0.752(μ₀/kT)^{3/2} is a good index of the onset of degeneracy (when η > 1 there is degeneracy).

The internal energy of the completely degenerate gas (which, like that of the boson gas, is equal to -(3/2)Ω at all temperatures) is

    U_0 = \int_0^{\mu_0} \epsilon\, d\bar N = \tfrac{3}{5}\beta\bar N(\bar N/V)^{2/3} = \tfrac{3}{5}\bar N\mu_0 = -\tfrac{3}{2}\Omega_0    (26-4)

    P_0 = -\Omega_0/V = \tfrac{2}{5}\beta(\bar N/V)^{5/3}; \qquad S_0 = 0

Even at absolute zero a fermion gas exerts pressure. If electrons were neutral particles, their pressure would be about 2.7 × 10⁷ ρ^{5/3} atmospheres at absolute zero. Because of the strong electrical attractions to the ions of the crystal lattice, this pressure is largely counterbalanced by the forces holding the crystal together.

When T is small compared to μ₀/k (i.e., when η is larger than unity) but is not zero, a first-order correction to the formulas for complete degeneracy can be worked out. The results are

    U \approx U_0 + \tfrac{1}{4}\pi^2(\bar N/\mu_0)(kT)^2 = \bar N\mu_0\left[\tfrac{3}{5} + \tfrac{1}{4}\pi^2(kT/\mu_0)^2\right]

    F \approx U_0 - \tfrac{1}{4}\pi^2(\bar N/\mu_0)(kT)^2 = \tfrac{3}{5}\beta\bar N^{5/3}V^{-2/3} - \tfrac{\pi^2}{4\beta}(kT)^2 \bar N^{1/3}V^{2/3}

    S \approx \tfrac{1}{2}\pi^2 \bar N k(kT/\mu_0); \qquad C_p \approx C_v \approx \tfrac{1}{2}\pi^2(k^2T/\beta)\bar N^{1/3}V^{2/3}

    P \approx \tfrac{2}{5}\beta(\bar N/V)^{5/3} + (\pi^2/6\beta)(kT)^2(\bar N/V)^{1/3}    (26-5)

These formulas verify that, as long as T is small compared with µ₀/k, the fermion gas is degenerate, with thermal properties very different from those of a classical, perfect gas. The internal energy is nearly constant, instead of being proportional to T; the pressure is inversely proportional to the 5/3 power of the volume and its dependence on T is small.

The heat capacity of the degenerate gas is proportional to T at low temperatures, being considerably smaller than the classical value (3/2)Nk when T is less than µ₀/k. Thus the C_v of the conduction electrons is small compared to the lattice C_v for metals at room temperatures. However, the heat capacity of the lattice of ions is proportional to T³ at low temperatures [see Eq. (20-16)], so that if T is made small enough, the linear term, for the conduction-electron gas, will predominate over the cubic term for the lattice. It is found experimentally that, below about 3°K, the heat capacity of metals is linear in T instead of cubic, as it is for nonconductors. This experimental fact was one of the first verifications of the theoretical prediction (made by Sommerfeld) that the conduction electrons in a metal behave like a degenerate Fermi gas.
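The crossover is easy to exhibit with placeholder coefficients. This sketch (with assumed, merely order-of-magnitude values of the two coefficients, not numbers from the text) compares the linear electronic term with the cubic lattice term:

```python
# Minimal sketch: electronic heat capacity gamma*T versus lattice term A*T^3.
# gamma and A below are assumed placeholder values of plausible magnitude,
# chosen only to put the crossover near a few degrees K.
import math

gamma = 1.0e-3   # electronic coefficient, joule/(mole deg^2)      [assumed]
A     = 1.0e-4   # lattice (Debye) coefficient, joule/(mole deg^4) [assumed]

T_cross = math.sqrt(gamma / A)   # where gamma*T = A*T^3
print("crossover near", T_cross, "deg K")

for T in (1.0, 3.0, 10.0):
    C_el, C_lat = gamma * T, A * T ** 3
    winner = "electronic term dominates" if C_el > C_lat else "lattice term dominates"
    print(T, C_el, C_lat, winner)
```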

Behavior at Intermediate Temperatures 

When T is considerably larger than µ₀/k, the FD gas is no longer degenerate; it has roughly the same properties as the MB gas. For example, its equation of state at these high temperatures is

$$ PV \simeq NkT\left[1 + \frac{\eta}{2^{5/2}}\right] \simeq NkT\left[1 + \frac{1}{3\sqrt{2\pi}}\left(\frac{\mu_0}{kT}\right)^{3/2}\right] \tag{26-6} $$



which differs from the corresponding result for the boson gas of Eq. 
(25-20) only by the difference in sign inside the brackets. The pres- 
sure is somewhat greater than that for a perfect gas; the effect of the 
Pauli exclusion principle is similar to that of a repulsive force be- 
tween the particles. 

For intermediate temperatures the thermodynamic properties must be computed numerically. Referring to Eq. (26-1), we define a parameter η, as with Eq. (25-18),

$$ \eta = \frac{N}{gV}\left(\frac{h^2}{2\pi mkT}\right)^{3/2} = \frac{4}{3\sqrt{\pi}}\left(\frac{\mu_0}{kT}\right)^{3/2} = \frac{2}{\sqrt{\pi}}\int_0^\infty \frac{\sqrt{u}\;du}{e^{u+x}+1} = F_{1/2}(x) \;\to\; e^{-x}, \quad x \to \infty \tag{26-7} $$

where x = −(µ/kT) can be considered to be a function of η. The other thermodynamic quantities, being functions of x, are therefore functions of η,



$$ -\Omega = PV = \tfrac{2}{3}U = NkT\,\frac{F_{3/2}(x)}{\eta} $$

where

$$ F_{3/2}(x) = \frac{4}{3\sqrt{\pi}}\int_0^\infty \frac{u^{3/2}\,du}{e^{u+x}+1} \;\to\; e^{-x}, \quad x \to \infty $$

$$ S = Nk\left[\tfrac{5}{2}\,\frac{F_{3/2}}{F_{1/2}} + x\right]; \qquad F = -NkT\left[x + \frac{F_{3/2}}{F_{1/2}}\right]; \qquad x = -(\mu/kT) \tag{26-8} $$



Values of some of these quantities for a few values of the density parameter η are given in Table 26-1. The onset of degeneracy corresponds roughly to η = 1. For η < 1, PV is practically equal to NkT and C_v nearly equal to (3/2)Nk; the gas is a perfect gas. When η > 1, µ is positive, PV is much larger than NkT, S goes to zero, and C_v is much smaller than (3/2)Nk; the gas is degenerate. The curve for C_v is shown in Fig. 25-2, in comparison with those for a perfect gas and a boson gas. These numbers should be compared with those of Table 25-1, for the boson gas.
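The numerical work behind such a table is elementary quadrature. The sketch below (an assumed implementation, not the author's) evaluates F_{1/2} and F_{3/2} of Eqs. (26-7) and (26-8) on a finite grid; run at x = −0.35 it should reproduce the corresponding row of Table 26-1 (η ≈ 1, PV/NkT ≈ 1.17). At small η the virial form of Eq. (26-6) takes over: for η = 0.1, 1 + η/2^{5/2} = 1.018, against the tabulated 1.017.

```python
# Minimal sketch (assumed): evaluate the Fermi-Dirac integrals F_1/2 and F_3/2
# of Eqs. (26-7) and (26-8) by trapezoidal quadrature on a truncated grid.
import numpy as np

u  = np.linspace(1e-8, 60.0, 200001)   # integration variable, 0 < u < "infinity"
du = u[1] - u[0]

def trap(f):
    """Trapezoidal rule on the uniform grid u."""
    return du * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def F12(x):
    return (2.0 / np.sqrt(np.pi)) * trap(np.sqrt(u) / (np.exp(u + x) + 1.0))

def F32(x):
    return (4.0 / (3.0 * np.sqrt(np.pi))) * trap(u ** 1.5 / (np.exp(u + x) + 1.0))

x = -0.35                        # the eta = 1 row of Table 26-1
eta = F12(x)
print("eta    =", eta)           # about 1.0
print("PV/NkT =", F32(x) / eta)  # about 1.17
```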

TABLE 26-1
Functions for a Fermion Gas

   η         x        kT/µ₀     PV/NkT    S/Nk    2C_v/3Nk
   0         ∞         ∞         1.000     ∞       1.000
   0.01      4.60     17.81      1.001     7.1     0.997
   0.1       2.26      3.841     1.017     4.8     0.989
   1        −0.35      0.827     1.174     2.6     0.919
   10       −5.46      0.178     2.521     0.85    0.529
   100     −26.0       0.038    10.48      0.18    0.145
   316     −56.0       0.008    22.48      0.09    0.084
   ∞        −∞         0          ∞        0       0









27
Quantum Statistics for Complex Systems



This chapter is a mixed bag. In the first part we discuss the way one can work out the statistical properties of systems that are more complex than the simple gases studied in the preceding chapters. Here we show why helium atoms can behave like elementary Bose-Einstein particles, for example, and why there has to be a symmetry factor σ in Eq. (22-4).

Wave Functions and Statistics 

We cannot go much further in discussing quantum statistics without talking about wave functions. As any text on quantum mechanics will state, a wave function is a solution of a Schrödinger equation; its square is a probability density. For a single particle of mass m in a potential field φ(r) the equation is Hψ = εψ, where H is the differential operator

$$ H = -\frac{\hbar^2}{2m}\nabla^2 + \varphi(r) \tag{27-1} $$

which is applied to the wave function ψ(r). The values of the energy factor ε, for which the equation can be solved to obtain a continuous, single-valued, and finite ψ, are the allowed energies ε_j for the particle. The square of the corresponding solution ψ_j(r) (the square of its magnitude, if ψ is complex) is equal to the probability density that the particle is at the point r. Therefore ψ must be normalized,

$$ \iiint |\psi_j(r)|^2\; dx\, dy\, dz = 1 \tag{27-2} $$

The mathematical theory of such equations easily proves that wave functions for different states i and j are orthogonal,

$$ \iiint \psi_i^*(r)\,\psi_j(r)\; dx\, dy\, dz = 0 \qquad \text{unless } i = j \tag{27-3} $$


The wave function ψ_j(r) embodies what we can know about the particle in state j. According to quantum theory, we cannot know the particle's exact position; so we cannot expect to obtain a solution of its classical motion by finding x, y, and z as functions of time. All we can expect to obtain is the probability that the particle is at r at time t, which is |ψ|². The relation between classical and quantum mechanics is the relation between the operator H of Eq. (27-1) and the Hamiltonian function H(q,p) of Eqs. (13-9) and (16-4). For a single particle (the k-th one, say)

$$ H_k(q,p) = \frac{1}{2m}\left(p_{kx}^2 + p_{ky}^2 + p_{kz}^2\right) + \varphi(r_k) $$

We see that the quantum-mechanical operator is formed from the classical Hamiltonian by substituting (ħ/i)(∂/∂q) for each p. For this reason we call the H of Eq. (27-1) a Hamiltonian operator.

The generalization to a system of N similar particles is obvious. If there is no interaction between the particles, the Hamiltonian for the system is the sum of the single-particle Hamiltonians,

$$ H(q,p) = \sum_{k=1}^{N} H_k(q,p) $$

and the Schrödinger equation for the system is

$$ H\Psi = E\Psi; \qquad H = \sum_{k=1}^{N} H_k; \qquad H_k = -\frac{\hbar^2}{2m}\nabla_k^2 + \varphi(r_k) \tag{27-4} $$

where

$$ \nabla_k^2 = \frac{\partial^2}{\partial x_k^2} + \frac{\partial^2}{\partial y_k^2} + \frac{\partial^2}{\partial z_k^2} $$

The values of E for which there is a continuous, single-valued, and finite solution Ψ_ν(r₁, r₂, …, r_N) of Eq. (27-4) are the allowed values E_ν of the energy of the system. They are, of course, the sums of the single-particle energies ε_j, one for each particle. We have used these facts in previous chapters [see Eqs. (19-1) and (23-1), for example].

A possible solution of Eq. (27-4) is a simple product of single-particle wave functions,

$$ \Psi_\nu(r_1, r_2, \ldots, r_N) = \psi_{j_1}(r_1)\,\psi_{j_2}(r_2)\cdots\psi_{j_N}(r_N); \qquad E_\nu = \sum_{k=1}^{N} \epsilon_{j_k} \tag{27-5} $$

where j_k stands for the set of quantum numbers of the k-th particle. This would be an appropriate solution for distinguishable particles, for it has specified the state of each particle: state j₁ for particle 1, state j₂ for particle 2, and so on. The square of Ψ_ν is a product of single-particle probability densities |ψ_{j_k}(r_k)|² that particle k, which is in the j_k-th state, is at r_k. We should note that for particles with spin, each ψ has a separate factor which is a function of the spin coordinate, a different function for each different spin state. Thus coordinate r_k represents not only the position of the particle but also its spin coordinate, and the quantum numbers represented by j_k include the spin quantum number for the particle.

Symmetric Wave Functions 

This product wave function, however, will not do for indistinguishable particles. What is needed for them is a probability density that will have the same value if particle 1 is placed at point r (including spin) as it has if particle k is placed there. To be more precise, we wish a probability density |Ψ(r₁, r₂, …, r_N)|² which is unchanged in value when we interchange the positions (and spins) of particle 1 and particle k, or any other pair of particles. The simple product wave function of Eq. (27-5) does not provide this; if j₁ differs from j_k, then interchanging r₁ and r_k produces a different function.

However, other solutions of Eq. (27-4), having the same value E_ν of the energy of the system as does solution (27-5), can be obtained by interchanging quantum numbers and particles. For example,

$$ \Psi(r_N, r_{N-1}, \ldots, r_1) = \psi_{j_N}(r_1)\,\psi_{j_{N-1}}(r_2)\cdots\psi_{j_1}(r_N) $$

is another solution with energy E_ν. There are N! possible permutations of N different quantum numbers among N different particle wave functions. If several different particles have the same quantum numbers, if n_j particles are in state j, for example, then there are (N!/n₁!n₂!⋯) [compare with Eq. (21-9)] different wave functions Ψ_ν which can be obtained from (27-5) by permuting quantum numbers and particles.

Therefore a possible solution of Eq. (27-4), for the allowed energy E_ν, would be a sum of all the different product functions that can be formed by permuting states j among particles k. Use of Eqs. (27-2) and (27-3) can show that for such a sum to be normalized, it must be multiplied by √(n₁!n₂!⋯/N!). However, such details need not disturb us here; what is important is that this sum is a solution of Eq. (27-4) for the system state ν with energy E_ν, which is unchanged in value when any pair of particle coordinates is interchanged (the change rearranges the order of functions in the sum but does not introduce new terms). Therefore, its square is unchanged by such an interchange and the wave function is an appropriate one for indistinguishable particles. For such a wave function it is no longer possible to talk about the state of a particle; all particles participate in all states; all we can say is that n_j particles are in state j at any time. Such a wave function is said to be symmetric to interchange of particle coordinates.

A few examples of symmetric wave functions for two particles are ψ₁(r₁)ψ₁(r₂) and (1/√2)[ψ₁(r₁)ψ₂(r₂) + ψ₁(r₂)ψ₂(r₁)]; a few for three particles are ψ₁(r₁)ψ₁(r₂)ψ₁(r₃) or (1/√3)[ψ₁(r₁)ψ₁(r₂)ψ₂(r₃) + ψ₁(r₁)ψ₂(r₂)ψ₁(r₃) + ψ₂(r₁)ψ₁(r₂)ψ₁(r₃)]; and so on.
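A numerical illustration may make the symmetry concrete. In this sketch the one-particle functions are arbitrary stand-ins (assumed for the example, not taken from the text); the point is only that the symmetrized combination is unchanged when the two coordinates are interchanged.

```python
# Minimal sketch (assumed): the symmetrized two-particle combination
# (1/sqrt 2)[psi1(r1)*psi2(r2) + psi1(r2)*psi2(r1)] is unchanged when the
# particle coordinates r1, r2 are swapped, so its square is symmetric.
import math

def psi1(r):                     # arbitrary stand-in for "state 1"
    return math.exp(-r * r)

def psi2(r):                     # arbitrary stand-in for "state 2"
    return r * math.exp(-r * r / 2.0)

def Psi_sym(r1, r2):
    return (psi1(r1) * psi2(r2) + psi1(r2) * psi2(r1)) / math.sqrt(2.0)

r1, r2 = 0.3, 1.7
print(math.isclose(Psi_sym(r1, r2), Psi_sym(r2, r1)))   # True
```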
Antisymmetric Wave Functions 

However, since our basic requirement is that of symmetry for the square of Ψ, we have an alternative choice, that of picking a wave function antisymmetric with respect to interchange of particle coordinates, which changes its sign but not its magnitude when the coordinates of any pair are interchanged. The square of such a Ψ also is unchanged by the interchange. Such an antisymmetric solution can be formed out of the product solutions of Eq. (27-5), but only if all particle ψ's are for different states. If every j_k differs from every other j, then an antisymmetric solution of Eq. (27-4), with energy E_ν, is the determinant



$$ \Psi_\nu = \frac{1}{\sqrt{N!}}\begin{vmatrix} \psi_{j_1}(r_1) & \psi_{j_1}(r_2) & \cdots & \psi_{j_1}(r_N) \\ \psi_{j_2}(r_1) & \psi_{j_2}(r_2) & \cdots & \psi_{j_2}(r_N) \\ \vdots & \vdots & & \vdots \\ \psi_{j_N}(r_1) & \psi_{j_N}(r_2) & \cdots & \psi_{j_N}(r_N) \end{vmatrix} \tag{27-6} $$



The properties of determinants are such that an interchange of any two columns (interchange of particle coordinates) or of any two rows (interchange of quantum numbers) changes the sign of Ψ_ν. The proof that 1/√N! must be used to normalize this function is immaterial here. What is important is that another whole set of wave functions, satisfying the requirements of particle indistinguishability, is the set of functions that are antisymmetric to interchange of particle coordinates. For this set, no state can be used more than once (a determinant with two rows identical is zero).
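Both properties, the sign change under a column interchange and the vanishing of a determinant with a repeated row, are easy to verify numerically; the following sketch (with made-up one-particle functions, assumed only for the example) does so for N = 3.

```python
# Minimal sketch (assumed): a 3-particle Slater determinant as in Eq. (27-6),
# rows labeled by states, columns by particle coordinates. Swapping two
# coordinates reverses the sign; repeating a state gives zero (Pauli).
import math
import numpy as np

states = [lambda r: math.exp(-r),
          lambda r: r * math.exp(-r),
          lambda r: r * r * math.exp(-r)]   # arbitrary stand-ins for states

def slater(state_list, coords):
    M = np.array([[s(r) for r in coords] for s in state_list])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(coords)))

r = [0.2, 0.9, 1.6]
a = slater(states, r)
b = slater(states, [r[1], r[0], r[2]])               # particles 1 and 2 swapped
print(np.isclose(a, -b))                             # True: antisymmetric
print(np.isclose(slater([states[0], states[0], states[2]], r), 0.0))  # True: Pauli
```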

By now it should be apparent that the two types of wave functions correspond to the two types of quantum statistics. Wave functions for a system of bosons are symmetric to interchange of particle coordinates; any number of particles can occupy a given particle state. Wave functions for fermions are antisymmetric to interchange of particle coordinates; because of the antisymmetry, no two particles can occupy the same particle state (which is the Pauli exclusion principle). Both sets of wave functions satisfy the indistinguishability requirement, that the square of Ψ be symmetric to interchange of particle coordinates. A simple application of quantum theory will prove that no system of forces which are the same for all particles will change a symmetric wave function into an antisymmetric one, or vice versa. Once a boson, always a boson, and likewise for fermions. It is an interesting indication of the way that all parts of quantum theory "hang together" that the fact that the quantity of physical importance is the square of the wave function should not only allow, but indeed demand, two different kinds of statistics, one for symmetric wave functions, the other for antisymmetric.

We have introduced the subject of symmetry of wave functions by talking about a system with no interactions between particles. But this restriction is not binding. Suppose we have a system of N particles, having a classical Hamiltonian H(q,p) which includes potential energies of interaction φ(r_kl), which is symmetric to interchange of particles (as it must be if the particles are identical in behavior). Corresponding to this H is a Hamiltonian operator H, which includes the interaction terms φ(r_kl) and in which each p_k is changed to (ħ/i)(∂/∂q_k); this operator is also symmetric to interchange of particle coordinates. There are then two separate sets of solutions of the general Schrödinger equation

$$ H\Psi(r_1, r_2, \ldots, r_N) = E\Psi $$

One set is symmetric to interchange of coordinates,

$$ \Psi(r_1, \ldots, r_k, \ldots, r_l, \ldots, r_N) = \Psi(r_1, \ldots, r_l, \ldots, r_k, \ldots, r_N) $$

and the other is antisymmetric,

$$ \Psi(r_1, \ldots, r_k, \ldots, r_l, \ldots, r_N) = -\Psi(r_1, \ldots, r_l, \ldots, r_k, \ldots, r_N) $$

When there are particle interactions, the two sets usually have different allowed energies E_ν; in any case a function of one set cannot change into one of the other set.

The symmetric set represents a system of bosons and the antisymmetric set represents a system of fermions. It is by following through the requirements of symmetry of the wave functions that we can work out the idiosyncrasies of systems at low temperatures and high densities. By this means we can work out the thermal properties of systems with strong interactions, mentioned early in Chapter 24. At high temperatures and/or low densities, Maxwell-Boltzmann statistics is adequate and we need not concern ourselves as to whether the wave function is symmetric or antisymmetric.

Wave Functions and Equilibrium States 

If a system is made up of N particles of one kind and M of another, 
the combined wave function of the whole is a product of an appro- 
priately symmetrized function of the N particles, times another for 
the M particles, with its proper symmetry. Thus, for a gas of N hy- 
drogen atoms, the complete wave function would be an antisymmetric 
function of all the electronic coordinates (including spin) times another 
antisymmetric for all the protons (including their spins); for both elec- 
trons and protons are fermions. 

This completely antisymmetrized wave function is appropriate for 
the state of final equilibrium, when we have waited long enough so that 
electrons have exchanged positions with other electrons, from one 
atom to the other, so that any electron is as likely to be around one 
proton as another. However, it takes a long time for electrons to in- 
terchange protons in a rarefied gas. Measurements are usually made 
in a shorter time, and correspond to an intermediate "metastable" 
equilibrium, in which atoms as a whole change places, but electrons 
do not interchange protons. A wave function for such a metastable 
equilibrium is one made up of separate hydrogen atomic wave func- 
tions, symmetrized with regard to interchanging atoms as individual 
subsystems. 

Thus it is usually more appropriate to express the wave function 
for a system of molecules in terms of each molecule as a separate 
subsystem, taking into account the effects of exchanging molecule with 
molecule as individual units, and arranging it as though the individual 
particles in one molecule never interchange with those in another. 
The total wave function is thus put together out of products of molecu- 
lar wave functions, according to the appropriate symmetry for inter- 
change of whole molecules, each molecular wave function organized 
in regard to the appropriate symmetry for exchange of like particles 
within the molecule. 

Actually the differentiation between "metastable" and "long-term"
equilibrium in a gas is academic in all but a few cases. By the time 
the temperature has risen beyond the boiling point of most gases, so 
that the system is a gas, both the translational and rotational motions 
of the molecules can be treated classically, as was shown in Chapters 
21 and 22, and questions of symmetry no longer play a role. Only a 
few cases exist where the relation between wave-function symmetry 
and statistics is apparent. These cases each involved puzzling dis- 
crepancies with the familiar classical statistics; their resolution con- 
stituted one of the major vindications of the new statistics. 




One such case is for helium, gas and liquid. Normal helium (He⁴) is made up of a nucleus of two protons and two neutrons, surrounded by two electrons. In the bound nuclear state the spins of each heavy-particle pair are opposed, so that the net spin of the He⁴ nucleus is zero, as is the net spin of the electrons in the lowest electronic state. Since protons, neutrons, and electrons are each fermions, the combined wave function for, say, two helium atoms could be a product of three antisymmetric functions, one for all four electrons, another for the four protons, and a third for the four neutrons. This wave function would correspond to "long-term" equilibrium, since it includes the possibility of interchange of neutrons, for example, between the two nuclei. A more realistic wave function would be formed of products of separate atomic wave functions.

For example, we could assume that electrons 1 and 2 were in atom a, electrons 3 and 4 in atom b, and similarly with the neutrons and protons. The electronic wave function for atom a would then be antisymmetric for interchange of electrons 1 and 2, that for atom b antisymmetric for interchange of 3 and 4. We would not consider interchanging 1 and 3 or 1 and 4 separately, only interchanging atom a as a whole with atom b. Since interchanging atoms interchanges two electrons, and two protons and two neutrons, the effect of the interchange would be symmetric, since (−1)² = +1. Therefore the system wave function should be symmetric for interchange of atoms; He⁴ atoms should behave like bosons. As we mentioned at the end of Chapter 25, liquid He⁴ does exhibit "condensation" effects of the sort predicted for BE particles at low temperatures. In contrast, He³, which has only one neutron in its nucleus instead of two, and thus should not behave like a boson, does not exhibit condensation effects.

Other molecules also should behave like bosons (those with an even number of elementary particles) but they become solids before they have a chance to exhibit the effects of degeneracy. Normal helium (He⁴) is the only substance with small enough interatomic forces to be still fluid at temperatures low enough for boson condensation to take place; and once started, the condensation prevents solidification down to (presumably) absolute zero. At pressures above 25 atm He⁴ does solidify, and none of the condensation effects are noticeable.

Electrons in a Metal 

Another system in which the effects of quantum statistics are noticeable is the metallic crystal. In the case of nonmetallic crystals the usual assumption is valid: that each atom acts as a unit and that there is not sufficient time (during most experiments) for the atoms to change places. Consequently the Debye theory of crystal heat capacities does not consider the consequences of indistinguishability of the constituent atoms. In the case of magnetic effects, however, where the spins of some electrons are more strongly coupled to their counterparts on other atoms than to their neighbors in the same atom, the symmetry of the spin parts of these wave functions must be considered.

In the case of metals, the more tightly bound electrons, in the inner shells of the lattice ions, do not readily move from ion to ion. But the outer electrons can change places with their neighbors easily. Consequently the complete wave function for the metallic crystal can be written as a product of individual ionic wave functions, one for each ion in the lattice (with appropriate symmetry within each ionic factor), times an antisymmetric function for all the conduction electrons. The individual electron wave functions ψ_j(r_k) are not very similar to the standing waves of free particles [see Eq. (21-5)]; after all, they are traveling through a crystal lattice, not in force-free space. But they can exchange with their neighbors rapidly enough so that it is not possible to specify which electron is on which ion.

The combined wave function for the conduction electrons thus must be an antisymmetric combination like that of Eq. (27-6), which means that the usual type of FD degeneracy will take place over about the same range of temperatures (0 to about 1000°) as if these electrons were free particles in a gas. Since the thermal properties of a degenerate Fermi gas depend more on the degeneracy than on the exact form of the wave functions, these conduction electrons behave more like a pure Fermi gas than anyone expected (until it was worked out by Sommerfeld). In their electrical properties, of course, the effects of their interaction with the lattice ions become important; but even here the effects of degeneracy are still controlling.

Ortho- and Parahydrogen 

As mentioned a few pages ago, a system composed of hydrogen molecules (H₂) behaves like an MB gas as far as its translational energy goes. Each molecule behaves as though it is an indivisible subsystem, and the whole wave function can be considered to be a product of single-molecule wave functions, each of them being in turn a product of translational, rotational, vibrational, and electronic wave functions, corresponding to the separation of energies of Eq. (21-1). Each of these molecular factors of course must have the symmetry or antisymmetry required by the particles composing the molecule. For example, because protons are fermions, interchange of the two protons composing the molecule must change the sign of the wave function. (This is not the case with the HD molecule, where the two nuclei are dissimilar.)

Each proton has a position, relative to the center of mass of the molecule, and a spin. If the two spins are opposed, so that the total nuclear spin of the molecule is zero (singlet state), the spin part of the nuclear wave function is antisymmetric. Therefore the space part of the nuclear wave function must be symmetric, in order that the product of both will change sign when the two are interchanged. On the other hand, if the spins are parallel, so that the total nuclear spin is 1 (triplet state), the spin factor is symmetric and the space part of the nuclear wave function must be antisymmetric. Now the factor in the molecular wave function which achieves the interchange of position of the nuclei is the rotational factor, which is a spherical harmonic of the angles denoting the direction of the axis through the nuclei. Interchanging the positions of the nuclei corresponds to rotating the axis through 180°.

As mentioned in connection with Eq. (22-1), the allowed values of the square of the angular momentum of the molecule are ħ²ℓ(ℓ + 1), where ℓ is the order of the spherical harmonic in the corresponding rotational wave function. It turns out that those spherical harmonics with even values of ℓ are symmetric with respect to reversing the direction of the molecular axis; those with odd values of ℓ change sign when the axis is rotated 180°. The upshot of all this is that those H₂ molecules which are in the singlet nuclear state, with opposed spins, can have only even values of ℓ, and those in the triplet nuclear state can have only odd values of ℓ.
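For the m = 0 rotational functions this parity statement reduces to that of the Legendre polynomials, P_ℓ(−cos θ) = (−1)^ℓ P_ℓ(cos θ); the general-m spherical harmonics work out the same way. A minimal numerical check (an assumed sketch, not from the text):

```python
# Minimal sketch (assumed): axis reversal sends cos(theta) -> -cos(theta);
# for the m = 0 rotational wave functions, P_l(-x) = (-1)^l * P_l(x),
# so even l is symmetric and odd l antisymmetric under the reversal.
import numpy as np
from numpy.polynomial.legendre import Legendre

x = np.linspace(-1.0, 1.0, 11)          # sample values of cos(theta)
for l in range(6):
    P = Legendre.basis(l)
    ok = np.allclose(P(-x), (-1) ** l * P(x))
    print(l, "even/symmetric" if l % 2 == 0 else "odd/antisymmetric", ok)
```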

The nuclear spins are well protected from outside influences, and 
it is an exceedingly rare collision which disturbs them enough for the 
spin of one molecule to affect that of another. Therefore, the molecule 
that is in the singlet nuclear state stays for days in the singlet state 
and is practically a different molecule from one in a triplet nuclear 
state. Only after a long time (or in the presence of special catalysts) 
do the spins exchange from molecule to molecule, so that the whole 
system finally comes to over-all, long-term equilibrium. 

The two kinds of H₂ are permanent enough to be given different names; the singlet type, with even values of ℓ, is called parahydrogen. Its rotational partition function is

$$ Z_{rot}^{para} = \sum_{\ell=0,2,4,\ldots} (2\ell+1)\exp[-\theta_{rot}\,\ell(\ell+1)/T] \tag{27-7} $$



instead of the sum over all values of ℓ as in Eq. (22-1), which is valid for nonsymmetrical molecules. The triplet kind is called orthohydrogen. Its partition function is

$$ Z_{rot}^{ortho} = \sum_{\ell=1,3,5,\ldots} (2\ell+1)\exp[-\theta_{rot}\,\ell(\ell+1)/T] \tag{27-8} $$

Since the multiplicity of the singlet state (parahydrogen) is 1 and that of the triplet (orthohydrogen) is 3, hydrogen gas behaves as though it were a mixture of one part, (1/4)N, of parahydrogen to three parts, (3/4)N, of orthohydrogen.



When the heat capacity of this mixture is measured in the usual way, the two kinds of hydrogen act like separate substances, and the heat capacity C_v = −T(∂²F/∂T²)_V is a sum of two terms,

$$ C_v^{rot} = NkT\left[\frac{1}{4}\frac{\partial^2}{\partial T^2}\left(T \ln Z_{rot}^{para}\right) + \frac{3}{4}\frac{\partial^2}{\partial T^2}\left(T \ln Z_{rot}^{ortho}\right)\right] \tag{27-9} $$

and not the single term

$$ C_v^{rot} = NkT\,\frac{\partial^2}{\partial T^2}\left(T \ln Z_{rot}\right) \tag{27-10} $$



which was used in Chapter 22, was plotted in Fig. 22-1, and is valid for molecules with nonidentical nuclei. There is no difference between the two formulas in the classical limit of T ≫ θ_rot, where each second partial equals 1/T; the result is Nk in both cases. However, at low temperatures there is a difference between (27-9) and (27-10), which is plotted in Fig. 27-1 [the dotted curve is for Eq. (27-10), the solid one for (27-9), which is the one predicted for H₂]. The circles are the measured values, which definitely check with the assumption that para- and orthohydrogen act like separate substances.

[Fig. 27-1. The rotational part of the heat capacity of hydrogen gas (H₂). It differs from the dashed curve (identical with Fig. 22-1) because the two nuclei are indistinguishable protons.]

If the heat capacities are measured very slowly, in the presence of a catalyst to speed the exchange of nuclear spin, then the two varieties of H₂ do not behave like separate substances and the appropriate partition function is

$$ Z_{rot} = \frac{1}{4}\sum_{\ell\ \mathrm{even}} (2\ell+1)\exp[-\theta_{rot}\,\ell(\ell+1)/T] + \frac{3}{4}\sum_{\ell\ \mathrm{odd}} (2\ell+1)\exp[-\theta_{rot}\,\ell(\ell+1)/T] \tag{27-11} $$

and the heat capacity is obtained by inserting this in Eq. (27-10). This results in a different curve, which is not obtained experimentally without great difficulty. The metastable equilibrium, in which the two kinds of hydrogen behave as though they were different substances, is the usual experimental situation.
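The difference between the metastable mixture and true equilibrium is easy to compute numerically. The sketch below (an assumed calculation, not the author's; it takes θ_rot ≈ 85°K for H₂, a standard value not quoted in this text) evaluates the mixture heat capacity of Eq. (27-9) and the equilibrium form obtained by inserting Eq. (27-11) into Eq. (27-10), differentiating T ln Z by finite differences.

```python
# Minimal sketch (assumed): rotational C_v/Nk of hydrogen two ways --
# the metastable 1:3 para/ortho mixture, Eq. (27-9), versus the equilibrium
# partition function of Eq. (27-11) inserted into Eq. (27-10).
import math

THETA = 85.0   # theta_rot for H2 in deg K  [assumed standard value]

def Z(levels, T):
    return sum((2 * l + 1) * math.exp(-THETA * l * (l + 1) / T) for l in levels)

def C_over_Nk(f, T, h=1e-3):
    """C_v/Nk = T * d^2(T ln Z)/dT^2, by central differences on f(T) = T ln Z."""
    return T * (f(T + h) - 2.0 * f(T) + f(T - h)) / h ** 2

even, odd = range(0, 40, 2), range(1, 40, 2)

for T in (50.0, 100.0, 300.0):
    c_para  = C_over_Nk(lambda t: t * math.log(Z(even, t)), T)
    c_ortho = C_over_Nk(lambda t: t * math.log(Z(odd, t)), T)
    c_mix = 0.25 * c_para + 0.75 * c_ortho                              # Eq. (27-9)
    c_eq  = C_over_Nk(lambda t: t * math.log(0.25 * Z(even, t) + 0.75 * Z(odd, t)), T)
    print(T, round(c_mix, 3), round(c_eq, 3))   # both approach 1 (i.e., Nk) at high T
```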

The nonsymmetric HD molecule, of course, has no ortho or para varieties; the two nuclei are not identical. The curve of Fig. 22-1, or the dotted curve of Fig. 27-1 (with a different θ_rot, of course), is the appropriate one here, and the one that corresponds to the measurements.

On the other hand, the D₂ molecule again has identical nuclei. The deuterium nucleus has one proton and one neutron and a spin of 1. Exchanging nuclei exchanges pairs of fermions; thus the wave function should be symmetric to exchange of nuclei. The molecule with antisymmetric spin factor (paradeuterium) is also antisymmetric in regard to spatial exchange (ℓ = 1, 3, 5, …) and the one with symmetric spin factor (orthodeuterium) is also symmetric in the spherical-harmonic factor (ℓ = 0, 2, 4, …). There are twice as many ortho states as para states. Thus one can compute a still different curve for the C_v for D₂ gas, which again checks with experiment. Because of the deep-lying requirements of symmetry of the wave function, the spin orientations of the inner nuclei, which can be measured or influenced only with the greatest of difficulty, reach out to affect the gross thermal behavior of the gas. These symmetry effects also correctly predict the factors σ in Eq. (22-4) for polyatomic molecules.
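The ortho/para ratios quoted here follow from counting symmetric and antisymmetric spin states, which a few lines of brute force reproduce (a sketch under the usual counting rules, not code from the text):

```python
# Minimal sketch (assumed): for two identical nuclei each with n = 2I + 1 spin
# states, count the symmetric and antisymmetric two-nucleus spin combinations.
def spin_state_counts(n):
    sym  = sum(1 for i in range(n) for j in range(i, n))      # m1 <= m2 pairs
    anti = sum(1 for i in range(n) for j in range(i + 1, n))  # m1 <  m2 pairs
    return sym, anti

print(spin_state_counts(2))   # I = 1/2 (H2): (3, 1) -> ortho:para = 3:1
print(spin_state_counts(3))   # I = 1   (D2): (6, 3) -> twice as many ortho states
```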

Thus the effects of quantum statistics turn up in odd corners of the 
field, at low temperatures and for substances a part of which can stay 
gaslike to low-enough temperatures for the effects of degeneracy to 
become evident. For the great majority of substances and over the 
majority of the range of temperature and density, classical statistical 
mechanics is valid, and the calculations using the canonical ensemble 
of Chapters 19 through 22 quite accurately portray the observed re- 
sults. The situations where quantum statistics must be used to achieve 
concordance with experiment are in the minority (luckily; otherwise 
our computational difficulties would be much greater). But, when they 
are all considered, these exceptional situations add up to exhibit an 
impressive demonstration of the fundamental correctness of quantum 
statistics. 



References 



The texts listed below have been found useful to the writer of this volume. They represent alternative approaches to various subjects treated here, or more complete discussions of the material.

E. Fermi, "Thermodynamics," Prentice-Hall, New York, 1937, is a short, readable discussion of the basic concepts.

W. P. Allis and M. A. Herlin, "Thermodynamics and Statistical Mechanics," McGraw-Hill, New York, 1952, presents some alternative approaches.

F. W. Sears, "Introduction to Thermodynamics, the Kinetic Theory of Gases, and Statistical Mechanics," Addison-Wesley, Reading, Mass., 1953, also provides some other points of view.

H. B. Callen, "Thermodynamics," Wiley, New York, 1960, is a "postulational" development of the subject.

Charles Kittel, "Elementary Statistical Physics," Wiley, New York, 1958, contains short dissertations on a number of aspects of thermodynamics and statistical mechanics.

J. C. Slater, "Introduction to Chemical Physics," McGraw-Hill, New York, 1939, has a more complete treatment of the application of statistical mechanics to physical chemistry.

L. D. Landau and E. M. Lifshitz, "Statistical Physics," Addison-Wesley, Reading, Mass., 1958, includes a thorough discussion of the quantum aspects of statistical mechanics.






Problems 



1. Suppose you are given a tank of gas that obeys the equation of 
state (3-4), a calibrated container that varies (slightly) in volume, in 
an unknown way, with temperature, and an accurate method of meas- 
uring pressure at any temperature. How would you devise a thermom- 
eter that measures the perfect gas scale of temperature ? Could you 
also determine the constants a and b in the equation of state of the 
gas? 

2. The coefficient of thermal expansion β and the compressibility κ of a substance are defined in terms of partial derivatives

$$ \beta = (1/V)(\partial V/\partial T)_P; \qquad \kappa = -(1/V)(\partial V/\partial P)_T $$

(a) Show that (∂β/∂P)_T = −(∂κ/∂T)_P and that (β/κ) = (∂P/∂T)_V for any substance.

(b) It is found experimentally that, for a given gas,

$$ \beta = \frac{RV^2(V-nb)}{RTV^3 - 2an(V-nb)^2}; \qquad \kappa = \frac{V^2(V-nb)^2}{nRTV^3 - 2an^2(V-nb)^2} $$

where a and b are constants, and also that the gas behaves like a perfect gas for large values of T and V. Find the equation of state of the gas.

3. A gas obeys equation of state (3-4). Show that for just one critical state, specified by the values T_c and V_c, both (∂P/∂V)_T and (∂²P/∂V²)_T are zero. Write the equation of state giving P/P_c in terms of T/T_c and V/V_c. Plot three curves for P/P_c as a function of V/V_c, one for T = (1/2)T_c, one for T = T_c, and one for T = 2T_c. What happens physically when the equation indicates three allowed values of V for a single P and T?

4. Suppose that all the atoms in a gas are moving with the same 
speed v, but that their directions of motion are at random. 

(a) Average over directions of incidence to compute the mean number of atoms striking an element of wall area dA per second (in terms of N, V, v, and dA) and the mean momentum per second imparted to dA.

(b) Suppose, instead, that the number of atoms having speeds between v and v + dv is 2N[1 − (v/v_m)](dv/v_m) for v < v_m (the directions still at random). Calculate for this case the mean number per second striking dA and the mean momentum imparted per second, in terms of N, V, v_m, and dA. Show that Eq. (2-4) holds for both of these cases.

5. A gas with Van der Waals' equation of state (3-4) has an internal energy

$$ U = nRT - (an^2/V) + U_0 $$

Compute C_v and C_p as functions of V and T, and compute T as a function of V for an adiabatic expansion.

6. An ideal gas for which C_v = (5/2)nR is taken from point a to point b in Fig. P-6, along three paths, acb, adb, and ab, where P₂ = 2P₁, V₂ = 2V₁.

[Fig. P-6: P-V diagram with points a (P₁, V₁), b (P₂, V₂), c, and d, and isotherms T₁ and T₂.]

(a) Compute the heat supplied to the gas, in terms of n, R, and T₁, in each of the three processes.

(b) What is the heat capacity of the gas, in terms of R, for the process ab?

7. A paramagnetic solid, obeying Eqs. (3-6), (3-8), and (4-8) and having a heat capacity C_v = nAT³, is magnetized isothermally (at constant volume) at temperature T₀ from ℋ = 0 to a maximum magnetic field of ℋ_m. How much heat must be lost? It is then demagnetized adiabatically (at constant volume) to ℋ = 0 again. Compute the final temperature T₁ of the solid, in terms of ℋ_m, T₀, A, and D. How do you explain away the fact that, if ℋ_m is large enough or T₀ small enough, the formula you have obtained predicts that T₁ should be negative?

8. Derive equations for (∂V/∂T)_P and (∂V/∂P)_T analogous to Eqs. (4-4) and (4-6). Obtain an expression for (∂H/∂P)_T. For a perfect gas, with C_p = (5/2)nR and (C_p − C_v)/(∂V/∂T)_P = P, integrate the partials of H to obtain the enthalpy.

9. Figure P-9 shows a thermally isolated cylinder, divided into two parts by a thermally insulating, frictionless piston. Each side contains n moles of a perfect gas of point atoms. Initially both sides have the same temperature; heat is then supplied slowly and reversibly to the left side until its pressure has increased to (243P₀/32).




(a) How much work was done on the gas on the right ? 

(b) What is the final temperature on the right ? 

(c) What is the final temperature on the left ? 

(d) How much heat was supplied to the gas on the left ? 

10. The ratio Cp/Cy for air is equal to 1.4. Air is compressed 
from room temperature and pressure, in a diesel engine compression, 
to 1/15 of its original volume. Assuming the compression is adiabatic, 
what is the final temperature of the gas ? 

11. Compute ΔQ₁₂ and ΔQ₄₃, for a Carnot cycle using a perfect gas of point particles, in terms of nR and T_h and T_c. Using the perfect-gas scale of temperature, show that ΔW₂₃ = −ΔW₄₁. Show that the efficiency of the cycle is (T_h − T_c)/T_h and thus prove that the perfect-gas temperature scale coincides with the thermodynamic scale of Eq. (5-5).

12. A magnetic material, satisfying Eqs. (4-8) and (3-8), has a constant heat capacity, C_Vℋ = C. It is carried around the Carnot cycle shown in Fig. P-12, ℳ being reduced isothermally from ℳ₁ to ℳ₂ at T_h, then reduced adiabatically from ℳ₂ to ℳ₃, when it has temperature T_c, then remagnetized isothermally at T_c to ℳ₄, and thence adiabatically back to T_h and ℳ₁.

(a) Express ℳ₃ in terms of T_h, T_c, D, C, and ℳ₂, and relate ℳ₄ similarly with ℳ₁.

(b) How much heat is absorbed in process 1-2? How much given off in process 3-4?

(c) How much magnetic energy ΔW is given up by the material in each of the four processes? Show that ΔW₂₃ = −ΔW₄₁.

(d) Show that the efficiency of the cycle (heat to magnetic energy) is (T_h − T_c)/T_h.

[Fig. P-12: the cycle in the ℋ-ℳ plane.]



13. When a mole of liquid is evaporated at constant temperature T and vapor pressure P_v(T), the heat absorbed in the process is called the latent heat L_v of vaporization. A Carnot cycle is run as shown in Fig. P-13, going isothermally from 1 to 2, evaporating n moles of liquid and changing volume from V₁ to V₂, then cooling adiabatically to T − dT, P_v − dP_v by evaporating an additional small amount of liquid, then recondensing the n moles at T − dT, from V₃ to V₄, and thence adiabatically to P_v, T again.

[Fig. P-13: the cycle in the P-V plane; 1-2 at P_v, T; 3-4 at P_v − dP_v, T − dT.]

(a) Show that V₂ − V₁ = V_g − V_ℓ, where V_g is the volume occupied by n moles of the vapor and V_ℓ the volume of n moles of the liquid, and that if dT is small enough, V₃ − V₄ ≈ V₂ − V₁.

(b) Find the efficiency of the cycle, in terms of dP_v, V_g − V_ℓ, and nL_v.

(c) If this cycle is to have the same efficiency as any Carnot cycle, this efficiency must be equal to (T_h − T_c)/T_h = dT/T. Equating the two expressions for efficiency, obtain an equation for the rate of change dP_v/dT of the vapor pressure with temperature, in terms of V_g − V_ℓ, n, L_v, and T.

14. Work out the Carnot cycle with a gas of photons, obeying Eqs. 
(7-8). 






15. An ideal gas, satisfying Eqs. (4-7) and (4-12), is carried around the cycle shown in Fig. P-15; 1-2 is at constant volume, 2-3 is adiabatic, 3-1 is at constant pressure, V₃ is 8V₁, and n moles of the gas are used.

[Fig. P-15: P-V diagram of the three-step cycle 1-2-3.]

(a) What is the heat input, the heat output, and the efficiency of the cycle, in terms of P₁, V₁, n, and R?

(b) Compare this efficiency with the efficiency of a Carnot cycle operating between the same extremes of temperature.

16. An amount of perfect gas of one kind is in the volume C₁V of Fig. P-16 at temperature T and pressure P, separated by an impervious diaphragm D from a perfect gas of another kind, in volume C₂V and at the same pressure and temperature (C₁ + C₂ = 1). The volume V is isolated thermally. What is the entropy of the combination?

[Fig. P-16: volume V divided by diaphragm D into parts C₁V and C₂V, with C₁ + C₂ = 1.]

Diaphragm D is then ruptured and the two gases mix spontaneously, ending at temperature T, partial pressure C₁P of the first gas, C₂P of the second gas, all in volume V. What is the entropy now? Devise a pair of processes, using semipermeable membranes (one of which will pass gas 1 but not 2, the other of which will pass 2 but not 1), which will take the system from the initial to the final state reversibly and thus verify the change in entropy. What is the situation if gas 1 is the same as gas 2?

17. Two identical solids, each of heat capacity C_v (independent of T), one at temperature T + t, the other at temperature T − t, may be brought to common temperature T by two different processes:

(a) The two bodies are placed in thermal contact, insulated thermally from the rest of the universe, and allowed to reach T spontaneously. What is the change of entropy of the bodies and of the universe caused by this process?

(b) First a reversible heat engine, with infinitesimal cycles, is operated between the two bodies, extracting work and eventually bringing the two to common temperature. Show that this common temperature is not T, but √(T² − t²). What is the work produced and what is the entropy change in this part of process b? Then heat is added reversibly to bring the temperature of the two bodies to T. What is the entropy change of the bodies during the whole of reversible process b? What is the change in entropy of the universe?

18. Show that T dS = C_v dT + T(∂P/∂T)_V dV, and T dS = C_v(∂T/∂P)_V dP + C_p(∂T/∂V)_P dV.

19. A paramagnetic solid, obeying Eqs. (3-6), (3-8), and (4-8), has a heat capacity C_{P0}(T) (at zero magnetic field) dependent solely on temperature. First, show that

$$ T\,dS = C_P\,dT - T(\partial V/\partial T)_P\,dP + T(\partial \mathcal{M}/\partial T)_{\mathcal{H}}\,d\mathcal{H} $$

and, analogous to Eq. (8-13), that (∂C_{Pℋ}/∂ℋ)_{T,P} = T(∂²ℳ/∂T²)_ℋ. From this, show that

$$ S = \int (C_{P0}/T)\,dT - \tfrac{1}{2}nD(\mathcal{H}/T)^2 - \beta V P $$

and thence obtain G and U as functions of T, P, and ℋ. Obtain S as a function of T, V, and ℋ and thence obtain F and U as functions of T, V, and ℳ.

20. A gas obeys the Van der Waals' equation of state (3-4) and has heat capacity at constant volume C_v = (3/2)nR. Write the equation of state in terms of the quantities t = T/T_c, p = P/P_c, and v = V/V_c, where T_c = 8a/27Rb, P_c = a/27b², V_c = 3nb (see Problem 3). Calculate T_cS/P_cV_c in terms of t and v, likewise F/P_cV_c and G/P_cV_c. For t = 1/2 plot p as a function of v from the equation of state. Then, for the same value of t, calculate and plot G/P_cV_c as a function of p, by graphically finding v for each value of p from the plot, and then computing G/P_cV_c for this value of v (remember that for some values of p there are three allowed values of v). The curve for G/P_cV_c crosses itself. What is the physical significance of this?

21. Ten kilograms of water at a temperature of 20 °C is converted 
to superheated steam at 250° and at the constant pressure of 1 atm. 
Compute the change of entropy of the water. Cp (water) = 4180 joules/ 
kg-deg. L v (at 100°C) = 22.6 x 10 5 joules/kg. Cp (steam) = 1670 + 
0.494T + 186 x 10 6 /T 2 joules/kg-deg. 




22. Assume that near the triple point the latent heats L_m and L_v are independent of P, that the vapor has the equation of state of a perfect gas, that the volume of a mole of solid or liquid is negligible compared to its vapor volume, and that the difference V_ℓ − V_s is positive, independent of P or T, and is small compared to nL_m/T. Using these assumptions, integrate the three Clausius-Clapeyron equations for the vapor-pressure, sublimation-pressure, and melting-point curves. Sketch the form of these curves on the P-T plane.

23. The heat of fusion of ice at its normal melting point is 3.3 × 10⁵ joules/kg and the specific volume of ice is greater than the specific volume of water at this point by 9 × 10⁻⁵ m³/kg. The value of (1/V)(∂V/∂T)_P for ice is 16 × 10⁻⁵ per degree and its value of −(1/V)(∂V/∂P)_T is 12 × 10⁻¹¹ m²/newton.

(a) Ice at -2°C and at atmospheric pressure is compressed iso- 
thermally. Find the pressure at which the ice starts to melt. 

(b) Ice at -2°C and atmospheric pressure is kept in a container at 
constant volume and the temperature is gradually increased. Find the 
temperature at which the ice begins to melt. 

24. Considering that all constituents of a chemical reaction are perfect gases obeying Eqs. (8-21), write out the expressions for ln K, the logarithm of the equilibrium constant, in terms of T, T₀, of the constants of integration g_{i0} and s_{i0} per mole, and of the ν_i's. Show that the derivative of ln K with respect to T at constant P is equal to ΔH/RT², where ΔH is the change in enthalpy (i.e., the heat evolved) when ν_i moles of substance M_i disappear in the reaction (a negative value of ν_i means the substance appears, i.e., is a product). This relation between [d ln K/dT]_P and ΔH is known as the van't Hoff equation.

25. The probability that a certain trial (throw of a die or drop of a bomb, for example) is a success is p for every trial. Show that the probability that m successes are achieved in n trials is

$$ P_m(n) = \frac{n!}{m!\,(n-m)!}\;p^m(1-p)^{n-m} \qquad \text{(this is the binomial distribution)} $$

Find the average number m̄ of successes in n trials, the mean square ⟨m²⟩, and the standard deviation Δm of the number of successes in n trials.

26. The probability of finding n photons of a given frequency in an enclosure that is in thermal equilibrium with its walls is P_n = (1 − a)aⁿ, where a (0 < a < 1) is a function of temperature, volume of the enclosure, and the frequency of the photons. What is the mean number n̄ of photons of this frequency? What is the fractional deviation Δn/n̄ of this number from the mean? Express this fractional deviation in terms of n̄, the mean number. For what limiting value of n̄ does the fractional deviation tend to zero?






27. A molecule in a gas collides from time to time with another molecule. These collisions are at random in time, with an average interval τ, the mean free time. Show that, starting at time t = 0 (not an instant of collision), the probability that the molecule has not yet had its next collision at time t is e^{−t/τ}. What is the expected time to this next collision? Show also that the probability that its previous collision (the last one it had before time t = 0) was earlier than time −T is e^{−T/τ}. What is the mean time of this previous collision? Does this mean that the average time interval between collisions is 2τ? Explain the paradox.

28. In interstellar space, the preponderant material is atomic hy- 
drogen, the mean density being about 1 hydrogen atom per cc. What 
is the probability of finding no atom in a given cc ? Of finding 3 at- 
oms? How many H atoms cross into a given cc, through one of its 
1-cm 2 faces, per second, if the temperature is 1°K? If T is 1000°K? 

29. A closed furnace F (Fig. P-29) in an evacuated chamber contains sodium vapor heated to 1000°K. What is the mean speed v of the vapor atoms? At t = 0 an aperture is opened in the wall of the furnace, allowing a collimated stream of atoms to shoot out into the vacuum.

[Fig. P-29: furnace F with aperture; the beam travels a distance L to a moving plate.]

The aperture is closed again at t = τ. A distance L from the aperture, a plate is moving with velocity u, perpendicular to the atom stream, so that the stream deposits its sodium atoms along a line on the plate; the portion of the stream that strikes at time t hits the line at a point a distance X = ut from its beginning. Obtain a formula for the density of deposition of sodium as a function of X along the line, assuming that τ ≪ (L/v), and find the value of X for which this density is maximum. Sketch a curve of the density versus X.

30. Most conduction electrons in a metal are kept from leaving the metal by a sudden rise in electric potential energy, at the surface of the metal, of an amount eW₀, where W₀ is the electric potential difference between the inside and the outside of the metal. Show that if the conduction electrons inside the metal are assumed to have a Maxwell distribution of velocity, there will be a thermionic emission current of electrons from the surface of a metal at temperature T that is proportional to √T exp(−eW₀/kT). What is the velocity distribution of these electrons just outside the surface? [The measured thermionic current is proportional to T² exp(−eφ/kT), where φ < W₀; see Problem 63.]

[Fig. P-30: potential energy of an electron versus position, rising by eW₀ at the metal surface.]

31. A gas of molecules with a Maxwell distribution of velocity at temperature T is in a container having a piston of area A, which is moving outward with a velocity u (small compared to v), expanding the gas adiabatically (Fig. P-31). Show that, because of the motion of the piston, each molecule that strikes the piston with velocity v at an angle of incidence θ rebounds with a loss of kinetic energy of an amount 2mvu cos θ (u ≪ v). Show that consequently the gas loses energy, per second, by an amount −dU = PAu = P dV, where dV is the increase in volume of the container per second.

[Fig. P-31: piston of area A moving outward with velocity u.]

32. Helium atoms have a collision cross section approximately equal to 2 × 10⁻¹⁶ cm². In helium gas at standard conditions (1 atm pressure, 0°C), assuming a Maxwell distribution, what is the mean speed of the atoms? What is their mean distance apart? What is the mean free path? The mean free time?

33. A gas is confined between two parallel plates, one moving with respect to the other, so that there is a flow shear in the gas, the mean gas velocity a distance y from the stationary plate being βy in the x direction (Fig. P-33). Show that the zero-order velocity distribution in the gas is

[Fig. P-33: plates at y = 0 (v = 0) and y = L (v = βL), with v = βy between.]

$$ f^{(0)} = (1/2\pi mkT)^{3/2}\exp\{-(1/2mkT)[(p_x - m\beta y)^2 + p_y^2 + p_z^2]\} = f^{(0)}(p_x - m\beta y,\, p_y,\, p_z) $$

Use Eq. (14-3) to compute f to the first order of approximation. Show that the mean rate of transport of x momentum across a unit area perpendicular to the y axis is (N/V)(t_cβ/m)∫∫∫ p_y² f⁰ dp_x dp_y dp_z = (N/V)t_cβkT, which equals the viscous drag of the gas per unit area of the plate, which equals the gas viscosity η times β, the velocity gradient. Express η in terms of T and λ (the mean free path) and show that the diffusion constant D of Eq. (14-8) is equal to (η/ρ), where ρ is the density of the gas.

34. Use the Maxwell-Boltzmann distribution to show that, if the atmosphere is at uniform temperature, the density ρ and pressure P a distance z above the ground are exp(−mgz/kT) times ρ₀ and P₀, respectively (where g is the acceleration of gravity). Express ρ₀ and P₀ in terms of g, T, and M_a, the total mass of gas above a unit ground area. Obtain this same expression from the perfect gas law, P = ρkT/m, and the equation dP = −ρg dz giving the fall-off of pressure with height (assuming T is constant). Find the corresponding expressions for ρ and P in terms of z if the temperature varies with pressure in the way an adiabatic expansion does, i.e., P = (ρ/C)^γ, T = (Dρ)^{γ−1}, where γ = (C_p/C_v) [see Eqs. (4-12)].

35. A tube of length L = 2 m and of cross section A = 10⁻⁴ m² contains CO₂ at normal conditions of pressure and temperature (under these conditions the diffusion constant D for CO₂ is about 10⁻⁵ m²/sec). Half the CO₂ contains radioactive carbon, initially at full concentration at the left-hand end, zero concentration at the right-hand end, the concentration varying linearly in between. What is the value of t_c for CO₂ under these conditions? Initially, how many radioactive molecules per second cross the mid-point cross section from left to right? [Use Eqs. (14-7).] How many cross from right to left? Compute the difference and show that it checks with the net flow, calculated from the diffusion equation (net flow) = −D(dn/dx).

36. The collision cross section of an air molecule for an electron is about 10⁻¹⁹ m². At what pressure will 90 per cent of the electrons emitted from a cathode reach an anode 20 cm away?




37. A gas-filled tube is whirled about one end with an angular velocity ω. Find the expression for the equilibrium density of the gas as a function of the distance r from the end of the tube.

38. A vessel containing air at standard conditions is irradiated with x-rays, so that 0.01 per cent of its molecules are ionized. A uniform electric field of 10⁴ volts/meter is applied. What is the initial net flux of electrons? Of ions? (See Problem 36 for the cross section for electrons; the cross section for the ions is four times this. Why?) What is the ratio between drift velocity and mean thermal velocity for the electrons? For the ions?

39. A solid cylinder of mass M is suspended from its center by a fine elastic fiber so that its axis is vertical. A rotation of the cylinder through an angle θ from equilibrium requires a torque Kθ to twist the fiber. When suspended in a gas-filled container at temperature T, the cylinder exhibits rotational fluctuation due to Brownian motion. What is the standard deviation Δθ of the amplitude of rotation and what is the standard deviation Δω of its angular velocity? What would these values be if the container were evacuated?

40. Observations of the Brownian motion of a spherical particle of radius 4 × 10⁻⁷ m in water, at T = 300°K and of viscosity 10⁻³ newton-sec/m², were made every 2 sec. The displacements in the x direction, x(t) − x(t − 2), were recorded and were tabulated as shown in Table P-40.

TABLE P-40

   δ = x(t) − x(t − 2),       No. times this δ
   in units of 10⁻⁶ m         was observed
   Less than ±0.5                 111
   Between  0.5 and  1.5           87
   Between −0.5 and −1.5           95
   Between  1.5 and  2.5           47
   Between −1.5 and −2.5           32
   Between  2.5 and  3.5            8
   Between −2.5 and −3.5           15
   Between  3.5 and  4.5            3
   Between −3.5 and −4.5            2
   Between  4.5 and  5.5            0
   Between −4.5 and −5.5            1
   Larger than ±5.5                 0

Compute the mean value of δ and its standard deviation. How close is this distribution to a normal distribution [Eq. (11-17)]? Use Eq. (15-11) to compute Avogadro's number from the data, assuming R = kN₀ is known.




41. Show that if the Hamiltonian energy of a molecule depends on a generalized coordinate q or momentum p in such a way that H → ∞ as p or q → ±∞, it is possible to generalize the theorem on equipartition of energy to

$$ \left\langle q\,\frac{\partial H}{\partial q}\right\rangle_{av} = \left\langle p\,\frac{\partial H}{\partial p}\right\rangle_{av} = kT $$

Verify that this reduces to ordinary equipartition when H has a quadratic dependence on q or p. If H has the relativistic dependence on the momentum

$$ H = c\sqrt{p_x^2 + p_y^2 + p_z^2 + m^2c^2} $$

show that

$$ (c^2p_x^2/H)_{av} = (c^2p_y^2/H)_{av} = (c^2p_z^2/H)_{av} = kT $$

42. A harmonic oscillator has a Hamiltonian energy H related to its momentum p and displacement q by the equation

$$ p^2 + (m\omega q)^2 = 2mH $$

When H = E, a constant energy, sketch the path of the system point in two-dimensional phase space. What volume of phase space does it enclose? In the case of N similar harmonic oscillators, which have the total energy E given by

$$ \sum_{j=1}^{N} p_j^2 + \sum_{j=1}^{N} (m\omega q_j)^2 = 2mE $$

with additional coupling terms, too small to be included, but large enough to ensure equipartition of energy, what is the nature of the path traversed by the system point? Show that the volume of phase space "enclosed" by this path is (1/N!)(2πE/ω)^N.

43. A gas of N point particles, with negligible (but not zero) collision interactions, enclosed in a container of volume V, has a total energy U. Show that the system point for the gas may be anywhere on a surface in phase space which encloses a volume V^N(2πmU)^{3N/2}/(3N/2)!. For an ensemble of these systems to represent an equilibrium state, how must the system points of the ensemble be distributed over this surface?

44. A system consists of three distinguishable molecules at rest, 
each of which has a quantized magnetic moment, which can have its z 
component +M, 0, or -M. Show that there are 27 different possible 



262 THERMAL PHYSICS 



states of the system; list them all, giving the total z component M_zi
of the magnetic moment for each. Compute the entropy S = -k Σ_i f_i ln(f_i)
of the system for the following a priori probabilities f_i:

(a) All 27 states equally likely (no knowledge of the state of the
system).

(b) Each state is equally likely for which the z component M_z of
the total magnetic moment is zero; f_i = 0 for other states (we know
that M_z = 0).

(c) Each state is equally likely for which M_z = M; f_i = 0 for all
other states (we know that M_z = M).

(d) Each state is equally likely for which M_z = 3M; f_i = 0 for all
other states (we know that M_z = 3M).

(e) The distribution for which S is maximum, subject to the re-
quirements that Σf_i = 1 and that the mean component Σf_i M_zi is equal to
yM. Show that for this distribution

f_i = e^(a(3M - M_zi)) / (1 + x + x²)³

where x = e^(aM) (a being the Lagrange multiplier) and where the value
of x (thus of a) is determined by the equation y = 3(1 - x²)/(1 + x + x²).
Compute x and S for y = 3, y = 1, and y = 0. Compare with (a), (b), (c),
and (d).
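
The counting can be verified by brute force; the sketch below (Python, an
illustration rather than the book's method) enumerates the 27 states and
evaluates S/k for each of the five distributions:

```python
import itertools, math

# The 27 states of three distinguishable moments; z components in units of M.
states = list(itertools.product((1, 0, -1), repeat=3))
Mz = [sum(s) for s in states]

S = lambda f: -sum(p * math.log(p) for p in f if p > 0)   # entropy in units of k

uniform_on = lambda mz: [1.0/Mz.count(mz) if m == mz else 0.0 for m in Mz]
print("(a)", S([1/27.0]*27))      # ln 27
print("(b)", S(uniform_on(0)))    # ln 7  (seven states with Mz = 0)
print("(c)", S(uniform_on(1)))    # ln 6
print("(d)", S(uniform_on(3)))    # ln 1 = 0

# (e) maximum-entropy distribution f_i = x^(3 - Mz_i)/(1+x+x^2)^3, where
# y = 3(1 - x^2)/(1 + x + x^2) is a quadratic that can be solved for x:
def f_max(y):
    x = (-y + math.sqrt(36 - 3*y*y)) / (2*(3 + y))
    Z = (1 + x + x*x)**3
    return [x**(3 - m) / Z for m in Mz]

for y in (3, 1, 0):
    print("y =", y, "  S/k =", S(f_max(y)))
# y = 3 reproduces case (d), y = 0 reproduces case (a), as expected.
```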

45. Suppose the atoms of the crystal of Eqs. (20-6) are sufficiently
"decoupled" so that it is a better approximation to consider the sys-
tem as a collection of ν = 3N harmonic oscillators, all having the
same frequency ω. Show that the partition function in this case is

Z = e^(-E₀/kT) (1 - e^(-ħω/kT))^(-3N)

where

E₀ = [(V - V₀)²/2κV₀] + (3/2)Nħω

Compute the entropy, internal energy, and heat capacity of this system
and obtain approximate formulas for kT small and also kT large com-
pared to ħω. For what range of temperature do these formulas differ
most markedly from those of Eq. (20-14)?
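
The heat capacity that follows from this Z is the Einstein-model result
C_v = 3Nk x² eˣ/(eˣ - 1)² with x = ħω/kT; the short Python sketch below
(an added illustration) shows the approach to the two limits:

```python
import numpy as np

# C_v / 3Nk for a set of values of x = h_bar*omega/kT.
x = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
cv_per_3Nk = x**2 * np.exp(x) / np.expm1(x)**2

# x -> 0 (kT large) gives the Dulong-Petit value 1;
# x -> infinity (kT small) gives x^2 e^{-x}, which vanishes exponentially.
for xi, ci in zip(x, cv_per_3Nk):
    print(f"h_bar*omega/kT = {xi:5.1f}   C_v/(3Nk) = {ci:.4f}")
```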

46. The atoms of the crystal of Problem 45 have spin magnetic mo-
ments, so that the allowed energy of interaction of each atom with the
magnetic field is ℋM_z = ±(1/2)mℋ, the magnets being parallel or anti-
parallel respectively to the magnetic field ℋ. Show that for this system
the canonical ensemble yields the following expression for the Helm-
holtz function:

F = E₀ + 3NkT ln(1 - e^(-ħω/kT)) - (1/2)Nmℋ - NkT ln(1 + e^(-mℋ/kT))
  ≈ E₀ + 3NkT ln(ħω/kT) - NkT ln 2 - (Nm²ℋ²/8kT)     (kT ≫ ħω and mℋ)

Compute S, C_v, and U for the high-temperature limit and compare
the results with Problem 19. What is the magnetization M for this
system? What is the rate of change of T with respect to ℋ for adia-
batic demagnetization of this crystal at constant volume?
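
As a check on the high-temperature form quoted above, the magnetization
it implies is

M = -(∂F/∂ℋ)_(T,V) ≈ Nm²ℋ/4kT

a Curie-law result; its 1/T dependence is what makes adiabatic demagneti-
zation an effective means of cooling.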

47. Use the final result of Problem 42 to show that the entropy of
N distinguishable harmonic oscillators, according to the microcanon-
ical ensemble (every system in the ensemble has energy NkT), is

S = Nk[1 + ln(kT/ħω)]

48. For the solid described by Eq. (20-16) show that P = [(V₀ - V)/
κV₀] + (γU_D/V), where U = [(V - V₀)²/2κV₀] + U_D and γ = -(V/θ) ×
(dθ/dV). Thence show that, for any temperature, if γ is independent
of temperature, the thermal expansion coefficient β is related to γ
by the formula

β = (1/V)(∂V/∂T)_P = κ(∂P/∂T)_V = κγC_V/V

The constant γ is called the Grüneisen constant.

49. A system consists of a box of volume V and a variable num- 
ber of indistinguishable (MB) particles each of mass m. Each particle 
can be "created" by the expenditure of energy y; once created it be- 
comes a member of a perfect gas of point particles within the volume 
V. The allowed energies of the system are therefore ny plus the ki- 
netic energies of n particles inside V, for n = 0, 1, 2, .... Show that 
the Helmholtz function for this system (canonical ensemble) is 



F = -kT ln[ Σ_(n=0)^∞ (VX)ⁿ/n! ] = -kT V X

where X = (2πmkT/h²)^(3/2) e^(-y/kT). Calculate the probability that n par-
ticles are present in the box and thence obtain an expression for N̄,
the mean number of free particles present as a function of y, T, and
V. Also calculate S, C_v, and P from F and express these quantities
as functions of N̄, T, and V.
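
Since the sum above is a Poisson generating function, the probability of
finding n particles is P(n) = e^(-VX)(VX)ⁿ/n!, with mean N̄ = VX. A sketch
with illustrative numbers (the particle mass, box size, and creation energy
below are assumptions, not values from the text):

```python
import math

h, k, m = 6.624e-34, 1.380e-23, 9.106e-31   # constants from the back table
T, V = 300.0, 1.0e-6                        # 300 K, a 1 cm^3 box (assumed)
y = 45 * k * T                              # assumed creation energy

# X = (2 pi m k T / h^2)^{3/2} e^{-y/kT};  N_bar = V X
X = (2*math.pi*m*k*T / h**2)**1.5 * math.exp(-y/(k*T))
N_bar = V * X
P = lambda n: math.exp(-N_bar) * N_bar**n / math.factorial(n)
print("mean number:", N_bar)
print("P(0), P(1), P(2):", [round(P(n), 4) for n in range(3)])
```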

50. What fraction of the molecules of H₂ gas are in the first excited
rotational state (j = 1) at 20°K, at 100°K, and at 5000°K? What are
the corresponding fractions for O₂ gas? What fraction of the mole-
cules of H₂ gas are in the first excited vibrational states (n = 1) at
20°K and 5000°K? What are the corresponding fractions for O₂ gas?
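
A numerical sketch of the simple Boltzmann ratios is given below (Python).
The characteristic temperatures are approximate literature values, not
taken from this text, and the simple ratio ignores the ortho-para nuclear-
spin weighting of H₂, so the low-temperature H₂ numbers are only indicative.

```python
import math

def f_rot_j1(T, theta_rot, jmax=500):
    # fraction in the first excited rotational state (j = 1)
    z = sum((2*j + 1) * math.exp(-j*(j + 1)*theta_rot/T) for j in range(jmax))
    return 3 * math.exp(-2*theta_rot/T) / z

def f_vib_n1(T, theta_vib):
    # fraction in the first excited vibrational state (n = 1)
    x = theta_vib / T
    return math.exp(-x) * (1 - math.exp(-x))

H2 = dict(theta_rot=85.4, theta_vib=6100.0)   # assumed values, in K
O2 = dict(theta_rot=2.07, theta_vib=2230.0)

for T in (20.0, 100.0, 5000.0):
    print(f"T = {T:6.0f}  H2 (j=1): {f_rot_j1(T, H2['theta_rot']):.3e}"
          f"   O2 (j=1): {f_rot_j1(T, O2['theta_rot']):.3e}")
print("H2 (n=1) at 5000 K:", f_vib_n1(5000.0, H2['theta_vib']))
```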




51. Plot the heat capacity C_v, in units of Nk, for O₂ gas, from 100
to 5000°K.

52. The solid of Eqs. (20-14) sublimes at low pressure, at a subli-
mation temperature T_s which is large compared to θ, the resulting
vapor being a perfect diatomic gas, with properties given by Eqs.
(21-14) and (22-2) (where θ_rot < T_s < θ_vib). Show that the equation
relating T_s and the sublimation pressure P_s is approximately

G_s ≈ V₀P_s + (9/8)Nkθ + 3NkT_s ln(θ/T_s) - NkT_s
    = G_g = -NkT_s ln(N₀kT_s^(7/2)/P_s V₀ θ^(5/2)) - NkT_s

where the equation

N₀ = V₀(4πIekθ/h²)(2πmkθ/h²)^(3/2)

defines the constant N₀. Since V₀ ≪ V_g = NkT_s/P_s and θ ≪ T_s, show
that this reduces to

P_s ≈ N₀k √(θT_s)/V₀

Also show that the latent heat of sublimation
L_s = T_s(S_g - S_s) ≈ (1/2)NkT_s

53. Work out the grand canonical ensemble for a gas of point atoms,
each with spin magnetic moment, which can have magnetic energy
+(1/2)μℋ or -(1/2)μℋ in a magnetic field ℋ, in addition to its kinetic
energy. Obtain the expression for N̄ and expressions for Ω, μ, S, U,
C_v, and the equation of state, in terms of N, T, and ℋ. How much
heat is given off by the gas when the magnetic field is reduced from ℋ
to zero isothermally, at constant volume?

54. A system consists of three particles, each of which has three
possible quantum states, with energy 0, 2E, or 5E, respectively.
Write out the complete expression for the partition function Z for
this system: (a) if the particles are distinguishable; (b) if the parti-
cles obey Maxwell-Boltzmann statistics; (c) if they obey Einstein-
Bose statistics; (d) if they obey Fermi-Dirac statistics. Calculate the
entropy of the system in each of these cases.
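
The four ways of counting can be checked by brute-force enumeration; the
sketch below (Python, with kT set to an illustrative value) builds each
partition function directly from its list of allowed system states:

```python
import itertools, math

E = 1.0                      # the problem's energy unit (illustrative)
kT = 1.0                     # assumed temperature, in units of E
levels = [0.0, 2*E, 5*E]
w = lambda Etot: math.exp(-Etot / kT)   # Boltzmann weight of a system state

# (a) distinguishable: every assignment of the 3 particles counts separately
Z_dist = sum(w(sum(c)) for c in itertools.product(levels, repeat=3))
# (b) Maxwell-Boltzmann: the distinguishable sum divided by 3!
Z_MB = Z_dist / math.factorial(3)
# (c) Einstein-Bose: one term per multiset of occupied levels
Z_BE = sum(w(sum(c)) for c in itertools.combinations_with_replacement(levels, 3))
# (d) Fermi-Dirac: one term per set of three distinct levels (only {0,2E,5E})
Z_FD = sum(w(sum(c)) for c in itertools.combinations(levels, 3))

print(Z_dist, Z_MB, Z_BE, Z_FD)
# the entropy then follows from S = k ln Z + U/T, with U = -d(ln Z)/d(1/kT)
```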

55. The maximum intensity per unit frequency interval, in the sun's
spectrum, occurs at a wavelength of 5000 Å. What is the surface tem-
perature of the sun?
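
For the frequency-interval convention the problem specifies, the Planck
energy density per unit frequency peaks at hν/kT ≈ 2.821 (a standard
value, not quoted in this text); a one-line estimate in Python:

```python
# Surface temperature from the frequency form of Wien's displacement law.
h, k, c = 6.624e-34, 1.380e-23, 3.0e8   # constants as in the back table
lam = 5000e-10                           # 5000 angstroms, in meters
nu = c / lam
T = h * nu / (2.821 * k)
print(f"T ~ {T:.0f} K")                  # roughly 1.0e4 K
```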

56. Show that, for Einstein-Bose particles (bosons),

S = -k Σ_i [n̄_i ln(n̄_i) - (1 + n̄_i) ln(1 + n̄_i)]

57. It has been reported that the fission bomb produces a tempera- 
ture of a million °K. Assuming this to be true over a sphere 10 cm in 
diameter: (a) What is the radiant-energy density inside the sphere? 
(b) What is the rate of radiation from the surface? (c) What is the 
radiant flux density 1 km away? (d) What is the wavelength of maxi- 
mum energy per unit frequency interval ? 
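
A sketch of the arithmetic (Python; Stefan's constant σ = 5.67 × 10⁻⁸
W/m²·°K⁴ and the frequency-peak condition hν = 2.821 kT are standard
values assumed here, not quoted from the text):

```python
import math

sigma = 5.67e-8                  # Stefan's constant, W/m^2 K^4
h, k, c = 6.624e-34, 1.380e-23, 3.0e8
T, r = 1.0e6, 0.05               # 1e6 K, sphere of 10 cm diameter

u = 4 * sigma * T**4 / c                   # (a) energy density, J/m^3
J = sigma * T**4                           # (b) emission per unit area, W/m^2
P = J * 4 * math.pi * r**2                 # total radiated power, W
flux_1km = P / (4 * math.pi * 1000.0**2)   # (c) flux density 1 km away, W/m^2
lam_max = h * c / (2.821 * k * T)          # (d) peak per unit frequency
print(u, J, flux_1km, lam_max)             # ~7.6e8, ~5.7e16, ~1.4e8, ~5e-9 m
```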

58. The Planck distribution can be obtained by considering each
standing electromagnetic wave in a rectangular enclosure (L_x L_y L_z)
as a degree of freedom, with coordinate Q_ν proportional to the ampli-
tude of the electric vector, with momentum P_ν proportional to the
amplitude of the magnetic vector, and with a field energy, correspond-
ing to a term in the Hamiltonian, equal to 2πc²P_ν² + (ω_ν²/8πc²)Q_ν², where
c is the velocity of light and where the allowed frequency of the ν-th
standing wave is given by

ω_ν² = π²c²[(l_ν/L_x)² + (m_ν/L_y)² + (n_ν/L_z)²]

(because of polarization, there are two different waves for each trio
l_ν, m_ν, n_ν). Use the methods of Eqs. (20-4) to (20-11) to prove that
the average energy contained in those standing waves with frequencies
between ω and ω + dω is dE = (ħ/π²c³) ω³ dω/(e^(ħω/kT) - 1). Compare
this derivation with the one dealing with photons, which produced Eq.
(25-3).

59. Analyze the thermal oscillations of electromagnetic waves
along a conducting wire of length L. In this case of one-dimensional
standing waves, the n-th wave will have the form cos(πnx/L)e^(iωt),
where ω = 2πf = πnc/L, c being the wave velocity and f the frequency
of the n-th standing wave. Use a one-dimensional analogue of the
derivation of Problem 58 to show that the energy content of the waves
with frequencies between f and f + df is 2Lhf df/[c(e^(hf/kT) - 1)]. If
the wire is part of a transmission line, which is terminated by its
characteristic impedance, all this energy will be delivered to this im-
pedance in a time 2L/c. Show, consequently, that the power delivered
to the terminal impedance, the thermal noise power, in the frequency
band df at frequency f is hf df/(e^(hf/kT) - 1). Show that this reduces
to the familiar uniform distribution (white noise) at low frequencies,
but that at high frequencies the power drops off exponentially. Below
what frequency is the noise power substantially "white" at room tem-
peratures (300°K)?
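
The crossover sits near f = kT/h, about 6 × 10¹² cycles/sec at 300°K;
a small Python check of the spectrum:

```python
import math

# Thermal noise power per unit bandwidth: p(f) = h f / (e^{h f/kT} - 1).
h, k, T = 6.624e-34, 1.380e-23, 300.0

def p(f):
    return h * f / math.expm1(h * f / (k * T))

print("white-noise level kT:", k * T)        # ~4.1e-21 W per unit bandwidth
print("frequency where hf = kT:", k * T / h) # ~6.2e12 Hz at 300 K
for f in (1e9, 1e11, 1e12, 1e13):
    print(f, p(f) / (k * T))   # fraction of the white-noise level remaining
```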

60. A container of volume V has N₀ short-range attractive centers
(potential wells) fixed in position within the container. There are also
bosons within the container. Each particle can either be bound to an
attractive center, with an energy -y (one level per center), or can be
a free boson, with energy equal to its kinetic energy, E. Use the anal-
ysis of Chapter 30 to show that the equation relating the mean number
N̄ of bosons to their chemical potential μ is

N̄ = N₀/[e^(-(y+μ)/kT) - 1] + 1.129 N_f F_(1/2)(μ/kT);   N_f = V(2πmkT/h²)^(3/2)

Draw curves for -μ/kT as a function of N₀/N̄ for N̄/N_f = 1 and for
y/kT = 0.1 and 1.0, using Table 25-1. Draw the corresponding curves
for PV/N̄kT.

61. Suppose the particles of Problem 60 are MB particles instead 
of bosons. Calculate the partition function Z for a canonical ensem- 
ble and compare it with the Z for Problem 49. 

62. Show that, for Fermi-Dirac particles (fermions),

S = -k Σ_i [n̄_i ln(n̄_i) + (1 - n̄_i) ln(1 - n̄_i)]



63. The conduction electrons of Problem 30 are, of course, fer-
mions. Show that, for FD statistics, the thermionic emission current
from the metal surface at temperature T is proportional to
T² exp(-eφ/kT), where eφ = W - μ ≈ W - μ₀ (μ₀ being the value of μ
at T = 0) is called the thermionic work function of the surface.

64. The container and N₀ attractive centers of Problem 60 have N̄
fermions, instead of bosons, in the system. By using Eqs. (26-7) and
(26-8) show that the equation relating μ and T and V is

N̄ = N₀/[e^(-(y+μ)/kT) + 1] + 1.129 N_f F_(1/2)(μ/kT),   N_f as in Problem 60

Plot μ/kT as function of N₀/N̄ for y/kT = 0.1 and 1.0, using Table
26-1. Draw the corresponding curves for PV/N̄kT.

65. Calculate the heat capacity of D₂ as a function of T/θ_rot from
0 to 1.

66. The Schrödinger equation for a one-dimensional harmonic os-
cillator is

-(ħ²/2m)(d²ψ/dx²) + (1/2)mω²x²ψ = Eψ

Its allowed energies and corresponding wave functions are

E_n = ħω(n + 1/2)

ψ_n(x) = [√(mω/πħ)/2ⁿn!]^(1/2) H_n(x√(mω/ħ)) exp(-mωx²/2ħ)

where H₀(z) = 1, H₁(z) = 2z, H₂(z) = 4z² - 2, H₃(z) = 8z³ - 12z, etc.

Two identical, one-dimensional oscillators thus have a Schrödinger
equation

H(x,y)Ψ = -(ħ²/2m)(∂²Ψ/∂x² + ∂²Ψ/∂y²) + (1/2)mω²(x² + y²)Ψ = EΨ

where x is the displacement of the first particle from equilibrium
and y that of the second.

(a) Show that allowed solutions of this equation, for the energy
ħω(n + 1), may be written either as linear combinations of the prod-
ucts ψ_m(x)ψ_(n-m)(y) for different values of m between 0 and n, or
else as linear combinations of the products

ψ_m[(x + y)/√2] ψ_(n-m)[(x - y)/√2]


(b) Express the solutions ψ_m[(x + y)/√2] ψ_(2-m)[(x - y)/√2],
for m = 0, 1, and 2, as linear combinations of the solutions ψ_m(x)ψ_n(y)
for m, n = 0, 1, 2.

(c) Which of these solutions are appropriate if the two particles 
are bosons ? Which if they are fermions ? 

(d) Suppose the potential energy has an interparticle repulsive
term -(1/2)mκ²(x - y)² (where κ² < ω²) in addition to the term
(1/2)mω²(x² + y²). Show that, in this case, the allowed energies for
bosons differ from those for fermions. Which lie higher and why?
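
Part (d) can be checked by transforming to normal coordinates; the sketch
below (Python, an added illustration) assumes κ² < ω²/2 so that both modes
stay bound, which is stronger than the condition stated in the problem:

```python
import numpy as np

# With the extra term -(1/2) m kappa^2 (x - y)^2, the mode u = (x+y)/sqrt(2)
# keeps frequency omega while v = (x-y)/sqrt(2) softens to
# omega_v = sqrt(omega^2 - 2*kappa^2).  Exchange x <-> y sends v -> -v,
# so symmetric (boson) states need n_v even, antisymmetric (fermion) odd.
hbar, omega = 1.0, 1.0
kappa2 = 0.3 * omega**2              # assumed coupling strength
omega_v = np.sqrt(omega**2 - 2*kappa2)

def energy(n_u, n_v):
    return hbar*omega*(n_u + 0.5) + hbar*omega_v*(n_v + 0.5)

boson   = sorted(energy(nu, nv) for nu in range(4) for nv in range(0, 8, 2))
fermion = sorted(energy(nu, nv) for nu in range(4) for nv in range(1, 8, 2))
print("lowest boson levels:  ", boson[:4])
print("lowest fermion levels:", fermion[:4])   # the fermion levels lie higher
```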



Constants 



Gas constant R = 1.988 kg-cal/mole-deg
  = 8.314 × 10³ joules/mole-deg
Avogadro's number N₀ (No. per kg mole) = 6.025 × 10²⁶
Number of molecules per m³ at standard conditions = 2.689 × 10²⁵
Standard conditions: T = 0°C = 273.16°K, P = 1 atm
Volume of perfect gas at standard conditions = 22.41 m³/mole
Dielectric constant of vacuum ε₀ = (1/9) × 10⁻⁹ farads/m
Electronic charge e = 1.602 × 10⁻¹⁹ coulomb
Electron mass m = 9.106 × 10⁻³¹ kg
Proton mass = 1.672 × 10⁻²⁷ kg
Planck's constant h = 6.624 × 10⁻³⁴ joule-sec
  ħ = h/2π = 1.054 × 10⁻³⁴ joule-sec
Boltzmann's constant k = 1.380 × 10⁻²³ joule/°K
Acceleration of gravity g = 9.8 m/sec²

1 atm = 1.013 × 10⁵ newtons/m²
1 cm Hg = 1333 newtons/m²
1 newton = 10⁵ dynes
1 joule = 10⁷ ergs
1 electron volt (ev) = 1.602 × 10⁻¹⁹ joules = k(11,600°K)
Velocity of a 1-ev electron = 5.9 × 10⁵ m/sec
1 kg-cal = 4182 joules
Ratio of proton mass to electron mass = 1836

v̄ = √(8kT/πm) = 6.2 × 10³ m/sec for electrons at T = 1°K
  = 146 m/sec for protons at T = 1°K
h²N₀^(2/3)/2πmk = 3961°K·m² for electrons, = 2.157°K·m² for protons
h³N₀/(2πmkT)^(3/2) = V_b/T^(3/2); V_b = 2.49 × 10⁵ m³ for electrons
  = 3.17 m³ for protons [see Eq. (24-11)]
(μ₀/kT)(VN₀/N)^(2/3) = (h²/2mkT)(3N₀/8π)^(2/3) = A_d/T
  A_d = 3017 m² for electrons [see Eq. (31-3)]



Glossary 



Symbols used in several chapters are listed here with the numbers 
of the pages on which they are defined. 



a      Van der Waals' constant, 18, 193
a      Stefan's constant, 55, 225
A      area, 10
b      Van der Waals' constant, 18, 193
ℬ      magnetic induction, 15
c      specific heat, 15
C      heat capacity, 15
d      perfect differential, 21
đ      imperfect differential, 22
D      Curie constant, 20, 115
D      diffusion constant, 121, 133
D(x)   Debye function, 178
e      = 2.7183, nat. log. base
e      charge on electron, 118
E      energy of system, 145
ℰ      potential difference, 83
ℰ      electric intensity, 118
f      distribution function, 91, 112, 141
F      Helmholtz' function, 62, 165
ℱ      Faraday constant, 83
g      multiplicity of state, 167, 227
G      Gibbs' function, 63
h      Planck's constant, 55
ħ      = h/2π
h_i    scale factor, 108
H      enthalpy, 52, 60
H      Hamiltonian function, 109, 141, 239
ℋ      magnetic intensity, 15
I      amount of information, 195
J      torsion in bar, 15
k      Boltzmann constant, 17, 147
L      latent heat, 70
m      mass of particle, 10
m      mobility, 119
M      magnetization, 15
n      number of moles, 14
n_j    occupation number, 208
N      number of particles, 10, 209
N₀     Avogadro's number, 13
p      momentum, 96, 109, 141
p_i    partial pressure, 80
P      pressure, 9, 165
𝒫      magnetic polarization, 15
q      coordinate, 108, 141
Q      heat, 8
Q      collision function, 104, 116
r      internuclear distance, 190
r      position vector, 103
R      gas constant, 16
S      entropy, 40, 45, 147, 165
t      time, 104
t_c    relaxation time, 116
T      temperature, 7, 17, 38
U      internal energy, 9, 25, 59
U      drift velocity, 119
v      velocity, 10
V      volume, 10
W      work, 23
x,y,z  coordinates, 103
Z      normalizing constant, 110
Z      partition function, 165, 182
Ƶ      grand partition function, 204
α      Lagrange multiplier, 151, 164
β      thermal expansion coefficient, 19
γ      C_p/C_v, 19
ε      particle energy, 181, 208
η      heat efficiency, 34
η      viscosity, 131
θ      Debye temperature, 178
κ      compressibility, 19
λ      mean free path, 102
μ      chemical potential, 16, 45, 79, 205
μ      magnetic moment, 113
μ₀     permeability of vacuum, 15
ν      number of degrees of freedom, 108
ν      quantum number, 145, 208
ν      stoichiometric coefficient, 78
π      = 3.1416
ρ      density, 128, 235
σ      standard deviation, 90
σ      collision cross section, 101
τ      mean free time, 102
Φ(q)   potential energy, 105
φ      number of degrees of freedom, 141
χ      magnetic susceptibility, 15
x      concentration, 80
ψ      wave function, 238
ω      2π × (frequency), 55, 170
ω      oscillator constant, 124, 175
Ω      grand potential, 63, 204
≈      approximately equal to
( )_av average value, 89
∂      partial derivative, 20
ln     natural logarithm, 44
n!     factorial function, 94, 156, 159



Index 



Absolute zero, 17 

Adiabatic process, 29, 34, 45, 

230, 251, 259 
Antisymmetric wave functions, 

241, 267 
Average energy, 110, 164, 203 

kinetic energy, 10, 123 

value, 89 
Avogadro's number, 13, 260,268 

Binomial distribution, 89, 256 
Black-body radiation, 54, 221,

265 
Boltzmann constant, 17, 147, 268 

equation, 104, 116 

Maxwell- (see Maxwell-Boltzmann distribution)
Bose-Einstein statistics, 212, 

264 
boson, 212, 220, 241, 266 

Calculations, thermodynamic, 64 
Canonical ensemble, 163, 202, 262 
Capacity, heat, 14, 113, 172, 197, 
201, 231 
at constant magnetization, 29, 

56, 255 
at constant pressure, 15, 28, 

65, 251 
at constant volume, 15, 28, 65, 
113, 236, 247, 255 
Carnot cycle, 33, 40, 252 



Centigrade temperature scale, 

7, 16 
Chemical potential, 16, 45, 56, 

67, 79, 203, 211, 234 
Chemical reactions, 78 
Clausius' principle, 37 
Clausius-Clapeyron equation, 

71, 256 
Collision, 97, 101 
Collision function, 104, 116 
Compressibility, 19, 112, 155, 179 
Concentration, 80 
Condensation, Einstein, 228, 232 
Conditional probability, 68, 131 
Conductivity, electric, 118, 260 
Continuity, equation of, 104, 142 
Critical point, 74 
Cross section, collision, 101,259 
Crystal, 111, 154, 170, 262 
Curie's law, 20, 56, 113 
Cycle 

Carnot, 33, 40 

reversible, 34 

Debye formulas for C v , 177,234 
Debye function, 178 
Degenerate state, 216, 230, 234 
Degrees centigrade, 7, 16 
Degrees of freedom, 108, 222 
Degrees Kelvin, 17, 38 
Density 

fluctuations, 127, 206 






Density, probability, 91 
Dependent variables, 6 
Derivative, partial, 20, 65 
Deuterium, 248 
Deviation, standard, 90, 98 
Diatomic molecule, 195 
Differential 

imperfect, 22 

perfect, 21 
Diffusion, 120, 133, 259 
Dirac (see Fermi-Dirac statistics)
Distinguishable particles, 186, 

210 
Distribution function, 89, 112, 
141, 147, 216 

binomial, 89, 256 

geometric, 218, 256 

Maxwell, 98, 160, 258 

Maxwell- Boltzmann, 103, 105, 
217, 233, 259 

momentum, 94 

normal, 93, 98 

Planck, 222, 265 

Poisson, 91, 206, 218 
Drift velocity, 98, 119, 260 

Efficiency of cycle, 34 
Einstein (see Bose-Einstein statistics)

condensation, 231 

formula for C v , 171 
Electric conduction in gases, 

118, 260 
Electrochemical processes, 82 
Electromagnetic waves, 54, 221, 

265 
Electrons, 213, 234, 244, 266 
Energy, 9 

density, 54, 222 

distribution in, 103, 165, 217, 
233 

Hamiltonian, 109, 141, 239,261 

internal, 9, 12, 23, 59, 155, 
164, 204, 251 

kinetic, 10, 108, 183 

magnetic, 16, 125 



Energy, potential, 105, 123 

rotational, 195 

vibrational, 112, 155, 200 
Engine, heat, 33 
Ensemble, 141, 147, 154, 163 

canonical, 163, 202, 262 

grand canonical, 202, 264 

microcanonical, 154, 261 
Enthalpy, 52, 60 
Entropy, 40, 69, 76, 147, 165,203 

of mixing, 49, 254 

of perfect gas, 45 

of universe, 44, 255 
Equation of state, 16, 18, 64, 165, 

179, 193, 204, 250 
Equilibrium constant, 81 

state, 6, 59 
Equipartition of energy, 110, 122 
Euler's equation, 42, 64 
Evaporation, 72 
Expansion coefficient, thermal, 

19, 250 
Expected value, 89 
Extensive variables, 14, 41, 59 

Factorial function, 94, 156, 159 
Fermi-Dirac statistics, 213, 233, 

264 
Fermion, 212, 233, 242, 266 
Fluctuations, 122, 206, 216, 226, 

260, 265 
Fokker-Planck equation, 133

Gas 

boson, 220 

constant, R, 16, 268 

fermion, 233 

fluctuations in, 127, 206 

paramagnetic, 56 

perfect, 10, 17, 30, 45, 51, 157, 
205, 252 

photon, 54, 221, 265 

statistical mechanics of, 181 

Van der Waals, 18, 48, 193, 
251, 255 
Geometric distribution, 218, 256 
Gibbs' function, 63, 70, 77, 203 






Gibbs-Helmholtz equation, 83 
Grand canonical ensemble, 202, 

264 
Grand partition function, 203, 

209 
Grand potential, 63, 204, 211, 

213 
Grüneisen constant, 263

Hamiltonian function, 109, 141, 

239, 261 
Hamilton's equations, 142, 173 
Heat, 8, 24 

capacity, 14, 28, 56, 65, 113, 
231, 236, 247 

engine, 33 

reservoir, 26, 29, 34 

specific, 15, 172, 197, 201,229 
Heisenberg principle, 145 
Helium, 212, 232, 244 
Helmholtz function, 62, 165, 178, 

189, 204 
Hydrogen gas, 196, 200, 245 
Hypersphere, volume of, 159

Independent variables, 6 
Indistinguishability, 185 
Information, 147 
Integrating factor, 22, 40 
Intensive variables, 14, 41 
Interaction between particles, 

190 
Internal energy, 9, 12, 23, 59, 

155, 164, 204, 251 
Inversion point, 52 
Irreversible process, 42 
Isothermal process, 29, 33 

Joule coefficient, 47 

experiment, 43, 46 
Joule-Thomson coefficient, 51

experiment, 51 

Kelvin, degrees, 17, 38 
Kelvin's principle, 35 
Kinetic energy, 10, 108, 183 



Kinetic energy, per degree of 
freedom, 110, 122 

Lagrange multipliers, 150, 164, 

203 
Langevin equation, 131 
Latent heat, 70 
Law of mass action, 81 
Law of thermodynamics 

first, 25 

second, 32, 38 
Legendre transformation, 61 
Liouville's theorem, 131 

Macrostate, 141 
Magnetic induction, 15 
Magnetic intensity, 15, 20 
Magnetization, 15, 20, 30, 113, 

263 
Mass, 10 

Mass action, law of, 81 
Maxwell distribution, 90, 160, 

258 
Maxwell's relations, 60, 62
Maxwell-Boltzmann

distribution, 102, 165, 217, 
233, 259, 264 

particles, 209, 211, 217, 231 
Mean free path, 102 

time, 102, 257 
Melting, 69 

latent heat of, 70 
Metals, 180, 234, 244, 266 
Metastable equilibrium, 243 
Microcanonical ensemble, 154, 

261 
Microstate, 5, 141, 146 
Mixing, entropy of, 49, 254 
Mobility, 119, 121 
Moment, magnetic, 113, 262 
Momentum, 11, 55, 142, 173 

distribution (see Maxwell-Boltzmann distribution)

of photon, 55 

space, 96 
Multiplicity, 167, 185, 227, 233 






Natural variables, 41, 57 
Newton, 9, 268 

Noise, thermal, 127, 206, 265 
Normal distribution, 93, 98 
Normal modes of crystal, 175 
Normalizing constant, 110, 238 
Numerator, bringing to, 65 

Occupation numbers, 208, 218, 

220 
Ohm's law, 119 
Omega space, 176 
Orthohydrogen, 245
Orthogonal functions, 238
Oscillator, simple, 111, 154, 170, 
261 

fluctuations of, 125 
Oxygen gas, 102, 196, 200 

Parahydrogen, 245 
Paramagnetic substance, 16, 20, 

30, 56, 113, 251, 255 
Partial derivative, 20, 65 
Particle states, 185, 208 
Partition function, 165, 182, 203, 

209 
Path, mean free, 102 
Pauli principle, 213, 218, 242 
Perfect differential, 21 
Perfect gas, 17, 51, 96, 152, 157, 
205 

of point particles, 10, 30, 45 
Phase, change of, 68 
Phase space, 103, 141, 144 
Phonons, 174 

Photon gas, 54, 212, 221, 265 
Planck distribution, 222, 265 
Planck's constant, 55, 222, 268 
Poisson distribution, 91, 206, 

218 
Polarization, magnetic, 15, 114 
Potential 

chemical, 16, 45, 56, 67, 79, 
203, 211, 234 

energy, 105, 123, 191, 199 

grand, 63, 204, 211, 213 



Potential, thermodynamic, 59, 

64 
Pressure, 9, 55, 75, 106, 179, 
193, 230, 235 

atmospheric, 259, 268 

partial, 73, 80 

radiation, 55, 225 

vapor, 73 
Probability, 87, 149 
Process 

adiabatic, 29, 34, 45, 230, 251, 
259 

electrochemical, 82 

irreversible, 43 

isothermal, 29, 33 

quasistatic, 24, 27 

reversible, 34, 40 

spontaneous, 42 
Protons, 213, 243 

Quantum states, 144, 167 

statistics, 208, 239 
Quasistatic process, 24, 27 

Radiation, thermal, 54, 221, 265 
fluctuation in, 226, 265 

Random walk, 90, 129 

Rayleigh-Jeans distribution, 223

Reactions, chemical, 78 

Relaxation time, 116 

Reservoir 

heat, 26, 29, 34 
work, 27 

Reversible process, 34, 40 

Rotational partition function, 195 

Sackur-Tetrode formula, 189
Scale 

centigrade, 7, 16 

factor, 108 

Kelvin, 17, 38 

thermodynamic, 38, 40 
Schrödinger's equation, 144, 183,

238, 266 
Solid state, 19, 68, 111, 154, 170, 

266 






Space 

momentum, 96 

phase, 103, 141, 144 
Specific heat, 15, 172, 197, 201, 

229, 236, 247 
Spin, 227, 233 
Spontaneous process, 42 
Standard deviation, 90, 98 
State 

equation of, 16, 18, 64, 165, 
179, 193, 204, 250 

macro, 141 

micro, 5, 141, 146 

system, 185, 208 

variables, 6, 13 
Stefan's law, 55, 225 
Stirling's formula, 95, 156 
Stokes' law, 131 
Stoichiometric coefficient, 78 
Sublimation, 74, 264 
Susceptibility, magnetic, 15 
Symmetric wave functions, 240, 

267 
System state, 185, 208 

Taylor's series, 105 
Temperature, 7 

Debye, 178 

of melting, 70 

scale, 7, 17, 38, 40 

thermodynamic, 38, 40 

of vaporization, 72 
Time, mean free, 102, 257 
Thermionic emission, 257, 266 
Translational partition function, 

183 



Transport phenomena, 116 
Triple point, 74, 256 

Ultraviolet catastrophe, 223 
Uncertainty principle, 145 
Universe, entropy of, 44, 255 

Van der Waals' equation, 18, 48, 

198, 251, 255 
Van't Hoff's equation, 82, 256
Vapor pressure, 73 
Vaporization, latent heat of, 72 
Variables 

dependent, 6 

extensive, 14, 41, 59 

independent, 6 

intensive, 14, 41 

state, 6, 13 
Variance, 90, 99 
Velocity distribution, 96, 160, 
257 

drift, 98, 119, 260 

mean, 100 

mean square, 123 
Vibrational partition function, 199 

specific heat, 179, 200 
Vibrations 

of crystal, 112, 155, 175, 200 

of molecule, 199 
Virial equation, 18 
Viscosity, 131, 259 

Walk, random, 90, 129 
Wave functions, 183, 238 
Weight, statistical, 192 
Work, 23 
Work reservoir, 27 




A Series of Lecture Note
and Reprint Volumes

NUCLEAR MAGNETIC RELAXATION: A Reprint Volume
by N. Bloembergen    192 pages    $3.95

S-MATRIX THEORY OF STRONG INTERACTIONS: A Lecture Note and Reprint Volume
by Geoffrey F. Chew    192 pages    $3.95

QUANTUM ELECTRODYNAMICS: A Lecture Note and Reprint Volume
by R. P. Feynman    208 pages    $3.95

THE THEORY OF FUNDAMENTAL PROCESSES: A Lecture Note Volume
by R. P. Feynman    160 pages    $3.95

THE MOSSBAUER EFFECT: A Collection of Reprints with an Introduction
by Hans Frauenfelder    256 pages    $3.95

THE MANY-BODY PROBLEM: A Lecture Note and Reprint Volume
by David Pines    464 pages    $3.95

QUANTUM THEORY OF MANY-PARTICLE SYSTEMS: A Lecture Note and Reprint Volume
by L. Van Hove, N. M. Hugenholtz, and L. P. Howland    272 pages    $3.95

Order from your bookseller or write to:
W. A. BENJAMIN, INC., PUBLISHERS
2465 Broadway, New York 25, N. Y.